Tuesday, November 24, 2009

Eco-Depot Code Review

I performed a review of the Eco-Depot system. My summary is at the bottom if you don't wish to read through all the details. Please send me suggestions about the quality of my review; I'd like to hear them.


Eco-Depot Review
reviewer: Remy Baumgarten


A. Review the build

The first part of any review is to verify that the system builds and passes automated quality assurance.

A1. Download the system, build it, and run any automated quality assurance checks (use "ant -f verify.build.xml"). If any errors occur, note these in your review.

BUILD SUCCESSFUL

B. Review system usage

If the system builds, the next step is to play with it.

B1. Run the system. Exercise as much of its functionality as possible. Does it implement all of the required features?

I ran the application with the date 11/23/2009 and it worked as intended.

B2. Try to break the system by providing it with unexpected input. Can you make the system crash or generate a stack dump? If so, note the input that caused the failure.

If I delete the date and click Submit, the application defaults to 11/24/2009 instead of today's date, 11/23/2009, or reporting an error.
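One way to guard against this kind of input is to fall back to today's date whenever the submitted field is blank or unparseable. A minimal sketch in plain Java — the class and method names here are invented for illustration, not taken from the Eco-Depot source:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateInput {

    /** Parses input as MM/dd/yyyy; blank or invalid input falls back to today. */
    public static Date parseOrToday(String input) {
        SimpleDateFormat format = new SimpleDateFormat("MM/dd/yyyy");
        if (input == null || input.trim().isEmpty()) {
            return new Date(); // blank field: default to today, not tomorrow
        }
        try {
            return format.parse(input.trim());
        } catch (ParseException e) {
            return new Date(); // unparseable text: also fall back to today
        }
    }

    /** Convenience wrapper that returns the result already formatted. */
    public static String parseOrTodayString(String input) {
        return new SimpleDateFormat("MM/dd/yyyy").format(parseOrToday(input));
    }

    public static void main(String[] args) {
        System.out.println(parseOrTodayString("11/23/2009")); // prints 11/23/2009
        System.out.println(parseOrTodayString(""));           // prints today's date
    }
}
```

With a fallback like this, an empty Submit never surprises the user with tomorrow's date.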

B3. Evaluate the user interface to the system. If it's a web app, is the interface self-explanatory? Does the system require unnecessary scrolling to see the data, or can it all fit in a single screen? Does it seem "professional", or does it look "amateurish"?

The data fits on a single page. The decimal places should be eliminated. The key is good; it lets us know what we are looking at. A bit of CSS/JS would also be nice.
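Dropping the decimal places is a one-line fix with `NumberFormat`. A quick sketch (the value shown is made up, not from the actual Eco-Depot output):

```java
import java.text.NumberFormat;
import java.util.Locale;

public class WholeNumberDisplay {

    /** Formats a reading with no decimal places, rounding to the nearest whole number. */
    public static String format(double value) {
        // getIntegerInstance() sets maximumFractionDigits to 0 for us.
        NumberFormat nf = NumberFormat.getIntegerInstance(Locale.US);
        return nf.format(value);
    }

    public static void main(String[] args) {
        System.out.println(format(123.456)); // prints 123
        System.out.println(format(99.7));    // prints 100 (rounds, not truncates)
    }
}
```

Note that this rounds rather than truncates; if truncation is the intended behavior, a simple cast to `long` would do instead.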

C. Review the JavaDocs.

Download the system and generate the javadocs (use "ant -f javadoc.build.xml"). Navigate to the build/javadoc folder and click on index.html to display the JavaDocs in a browser. Read through the JavaDocs and assess the following:

C1. Does the System Summary (provided in an overview.html file) provide a high-level description of the purpose of the system? Does it explain how each of the packages in the system relate to each other? Is the first sentence self-contained?

"A Wicket application of EcoDepot build system." This should be elaborated to describe the application briefly.

C2. Do the Package Summaries (provided in package.html files) provide a high-level description of the purpose of the package? Do they explain how the classes in the package relate to each other? Is the first sentence self-contained?

Yes. This is very helpful in understanding what each package's purpose is.

C3. Do the Class Summaries (provided at the top of each .java file) provide a high-level description of the purpose of the class? Does it provide sample code for clients of the class, if useful? Is the first sentence self-contained?

There is a high-level description, but there is no sample code. Sample code would be helpful to anyone who wanted to use your classes.
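A class-level JavaDoc comment with a short usage snippet costs little and pays off for new readers. A hedged sketch of the idea — the class and method names here are invented for illustration, not taken from Eco-Depot:

```java
/**
 * Tracks an energy threshold and reports whether a given daily
 * reading exceeds it.
 * <p>
 * Typical usage:
 * <pre>
 *   EnergyReport report = new EnergyReport(1500.0);
 *   boolean over = report.exceedsThreshold(1723.4);
 * </pre>
 */
public class EnergyReport {
    private final double threshold;

    public EnergyReport(double threshold) {
        this.threshold = threshold;
    }

    /** Returns true if the given reading is strictly above the threshold. */
    public boolean exceedsThreshold(double reading) {
        return reading > threshold;
    }
}
```

The `<pre>` block renders as preformatted sample code in the generated JavaDocs, so a reader sees a working call site before reading any method detail.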

C4. Do the Method Summaries (provided before each method) explain, from a client-perspective, what the method does? Do they avoid giving away internal implementation details that could change? Do they document any side-effects of the method invocation? (Note that you can click on the method name to see the source code for the method, which is helpful to assessing the correctness and quality of the javadoc.)

The methods provide what I found to be sufficient documentation; setThresholds is a good example of this.

C5. Please review Chapter 4 of the Elements of Java Style for additional JavaDoc best practices, and check for compliance.

Very good.

D. Review the names

One of the most important (if not the most important) form of documentation in any software system is the choice of names for program elements, such as packages, classes, methods, instance variables, and parameters. Due to evolution in requirements and design changes, the name originally chosen for a program element may no longer be appropriate or optimal. An important goal of review is to ensure that the names of program elements are well suited to their function. Due to the refactoring capabilities in modern IDEs such as Eclipse, renaming need not be a burden.

D1. Do another pass through the JavaDocs, this time concentrating on the names of packages, classes, and methods. Are these names well chosen? Do they conform to the best practices in Elements of Java Style, Chapter 3? Can you propose better names?

I would propose names that are not prefixed with the project name; since these classes are already in the project's package, the prefix is redundant.

D2. Once you have reviewed the names displayed in the JavaDocs, review the source code for internal names in the same way.

Looking through the source, I found that the names accurately reflect the purpose of each element. The code is easy to follow.

E. Review the testing.

The system should provide a useful set of test cases.

E1. Run Emma or some other code coverage tool on the system ("ant -f emma.build.xml"). Look at the uncovered code in order to understand one aspect of the limitations of the testing.

Emma reports roughly 65-85% coverage. There are four test cases, and they test for various conditions, not strictly happy paths. A few more tests to increase coverage, and the separation of tests into their own classes, would be nice.

E2. Review the test cases. Is each component of the system exercised by a corresponding test class? Do the test cases exercise more than "happy path" behaviors?

Described above.

E3. Review the test output. Under default conditions, the test cases should not generate any output of their own. If output is desired for debugging purposes, it should be controlled by (for example) a System property.

There is no extraneous output.

F. Review the package design

The JavaDoc review is focused on whether the system design is correctly explained. In this section, you start to look at whether the system design is itself correct.

F1. Consider the set of packages in the system. Does this reflect a logical structure for the program? Are the contents of each package related to each other, or do some packages contain classes with widely divergent function? Can you think of a better package-level structure for the system?

After reviewing this requirement, I found that the classes are logically divided, but they are not organized into packages.

G. Review the class design

Examine each class implementation with respect to at least the following issues.

G1. Examine its internal structure in terms of its instance variables and methods. Does the class accomplish a single, well-defined task? If not, suggest how it could be divided into two or more classes.

Yes, classes are logically divided.

G2. Are the set of instance variables appropriate for this class? If not, suggest a better way to organize its internal state.

Yes, the instance variables are related to the class.

G3. Does the class present a useful, but minimal interface to clients? In other words, are methods made private whenever possible? If not, which methods should be made private in order to improve the quality of the class interface to its clients?

The public methods need to be public because they are used by other classes. However, restructuring the class could allow a single method to be public, delegating to private internal methods.
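The restructure suggested above might look like this: one public entry point, with the intermediate steps kept private so clients cannot depend on them. The names are hypothetical, not from the Eco-Depot source:

```java
public class ReportBuilder {

    /** The only public entry point: clients ask for a finished report. */
    public String buildReport(double[] readings) {
        double total = sum(readings);
        return formatLine(total, readings.length);
    }

    // Helper steps are private: clients cannot call them directly,
    // so they can be renamed or restructured without breaking anyone.
    private double sum(double[] readings) {
        double total = 0.0;
        for (double r : readings) {
            total += r;
        }
        return total;
    }

    private String formatLine(double total, int count) {
        return count + " readings, total " + total;
    }
}
```

This keeps the client-facing interface minimal while still letting the internals be decomposed into small, focused methods.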

H. Review the method design

Examine each method implementation with respect to at least the following issues.

H1. Does the method accomplish a single thing? If not, suggest how to divide it into two or more methods.

Yes. However, I suggest refactoring to do things the Wicket way: use components such as Labels and ListViews/Repeaters (for tables) to generate the HTML, with CSS for styling, instead of building strings to display on the index page.

H2. Is the method simple and easy to understand? Is it overly long? Does it have an overly complicated internal structure (branching and looping)? If so, suggest how to refactor it into a more simple design.

Addressed in H1 above.

H3. Does the method have a large number of side-effects? (Side effects are when the result of the method's operation is not reflected purely in its return value. Methods have side-effects when they alter the external environment through changing instance variables or other system state. All "void" methods express the results of their computation purely through side-effect.) In general, systems in which most methods have few or zero side-effects are easier to test, understand, and enhance. If a method has a large number of side-effects, try to think about ways to reduce them. (Note that this may involve a major redesign of the system in some cases.)

The methods are mainly self-contained.
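For methods that do carry side effects, the contrast is worth spelling out. In this sketch (hypothetical names, not from Eco-Depot), the side-effecting version is only observable through mutated state, while the pure version expresses its result entirely in the return value and can be checked with a single assertion:

```java
public class Thermostat {
    private double threshold;

    public Thermostat(double threshold) {
        this.threshold = threshold;
    }

    public double getThreshold() {
        return threshold;
    }

    // Side-effecting style: the caller sees the result only
    // by inspecting the object's mutated state afterward.
    public void raiseThreshold(double amount) {
        this.threshold += amount;
    }

    // Pure style: the result lives entirely in the return value,
    // which makes the method trivial to test and reason about.
    public static double raised(double threshold, double amount) {
        return threshold + amount;
    }
}
```

Pushing logic into pure methods like `raised` and keeping mutation at the edges is one practical way to reduce side effects without a full redesign.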

I. Check for common look and feel

I1. Is the code implemented consistently throughout the system, or do different sections look like they were implemented by different people? If so, provide examples of places with inconsistencies.

The code looks consistent.

J. Review the documentation

J1. Does the project home page provide a concise and informative summary of what the system accomplishes? Does it provide a screen dump that illustrates what the system does?

The home page has neither a screenshot nor a clear description of the system.

J2. Is there a UserGuide wiki page? Does it explain how to download an executable binary of the system? Does it explain the major functionality of the system and how to obtain it? Are there screen images when appropriate to guide use? Try following the instructions: do they work?

The user guide is detailed, and it explains exactly what the application's purpose is.

J3. Is there a DeveloperGuide wiki page? Does it explain how to download the sources and build the system? Does it provide guidance on how to extend the system? Try following the instructions: do they work?

Yes, this was well written. It even describes how to set up the development environment and includes a link to the formatting XML configuration.

K. Review the Software ICU data

For this step, you must ask the Hackystat project owner to add you as a spectator to their project so you can invoke the Software ICU for their system. Run the Software ICU on the project.

K1. Is the Software ICU gathering data consistently and reliably? If not, what data appears to be missing, and why might it be missing?

Yes, the data is present, though it shows only a small amount.

K2. If data is present, what does it tell you about the quality of the source code? Is the current level high, medium, or low? What are the trends in quality?

Coverage improved, complexity increased, coupling improved, and churn dropped sharply, probably because of several large commits. DevTime shows only one spike, as do Commit and Build. From the look of the stats, this project was done at the last minute.

K3. If data is present, what does it tell you about the group process? Are all members contributing equally and consistently?

It looks like three members contributed and one has no build data at all, though this could be due to improperly set up sensors.

L. Review Issue management

Go to the project home page, then click on the Issues tab. Next, search for "All Issues" to retrieve a page containing all issues, both open and closed. Next, select "Grid" view, and select "Owner" for the rows. The result should be a page of issues in a grid layout, where each rows shows all of the issues (both open and closed) for a particular project member.

L1. Does the issue management page indicate that the project members are doing planning? In other words, that they have broken down the project into a reasonable set of issues (i.e. tasks taking one day or longer are represented by an issue)?

There are no issues present.

L2. Does the issue management page indicate that each member is contributing to the project? Does each member have at least one current "open" task? Has each member completed a reasonable number of tasks?

There are no issues present.

M. Review continuous integration

Go to the Hudson server, login, and select the continuous integration job associated with the project.

M1. Is the system being built regularly due to commits? Are there long periods (i.e. longer than a day) when there are no commits?

After the group started, there was a period of five days without any commits, followed by activity toward the end.

M2. If the system build fails, are there situations in which the system stayed in a failed state for a long period of time (i.e. longer than an hour)?

Failed builds were fixed immediately.

M3. Is there a continuous integration job? If so, check the configuration. Is it correct? Will it be triggered by a commit? Check the console output from one of the invocations. Are the appropriate Ant tasks (i.e. checkstyle, pmd, junit, etc.) being executed?

Yes; it polls SVN every five minutes and builds with the Ant tasks.

M4. Is there a daily build job? If so, check the configuration. Is it correct? Will the job be run regularly once a day? Check the console output from one of the invocations. Are the appropriate Ant tasks for a daily build job (i.e. coverage, coupling, complexity, size, etc.) being executed?

Yes. Based on the console output all the tasks are being run.

Summary

Based upon the above sections, provide an overall summary of your perspective on the project. Does it fulfill the requirements, and if not, why not? Is the code of reasonable quality based upon its design, implementation, and quality assurance techniques? If not, what should be improved? Does the group appear to be functioning well based upon Software ICU, Issue, and Continuous Integration data? If not, what appears to be wrong and what should they do to improve?

Good job on this project. I think that spending time now, before it's too late, learning the Wicket way of doing things will make later work easier and avoid time wasted rewriting code. A little CSS would help too, as would packaging the product appropriately. There are some good tutorials at http://w3schools.com/ that will help with CSS. I think this project has a lot of potential given time and effort. Good work.
