Thursday, December 17, 2009

Software Engineering: My Thoughts

I really enjoyed this Software Engineering course. It was by far the most intensive, time-consuming, and involved class I have taken so far. It required a lot of sacrifices in my personal life and other classes; it would not have been possible without getting myself in head deep. I enjoyed learning all the tools that make the process of developing software efficient and productive.

I found the group work most enjoyable because it's a lot more fun to see things come together quickly. I also liked the group aspect when everyone gets really involved and motivates each other. Of the three projects we worked on, I felt that WattDepot was the most enjoyable because it didn't have much of a user interface design aspect to it. Design to me is subjective: when you pour time and effort into something that you and everyone else agree is great, someone is bound not to like it.

I am looking forward to Software Engineering 2, where we will be building one system over the entire semester. Hopefully it will be a more open environment where the developers can set their own goals, specs, and project. I feel that this, with some direction, would produce the best learning environment, because the developers must love what they are doing.

I'll see you in a few weeks,

Remy

WattWaiter 2.1 Release


WattWaiter 2.1 is now released. Version 2.0 had some issues as far as the user interface was concerned, mostly on low-resolution screens (1024x768). We have since fixed this and changed the overall appearance to reflect a more standard interface without much eye candy.

If you are wondering what WattWaiter is, read here:

"Wattwaiter is a web application that provides a visual display of carbon intensities for a given day. This information can be used by consumers to adjust their electrical usage and become “smart” consumers. By observing carbon intensities, consumers can determine which times during the day are most efficient for electrical usage. This benefits the consumer by allowing usage of lower costing electricity and the electric company by balancing their load."

This project was a learning experience. It seems that abstraction using Wicket, when dealing with web technologies, takes you away from what is really going on. It claims to offer state on top of HTTP, a stateless protocol, as if it invented sessions. While it does do this, you must specifically implement this feature, otherwise users will share the same instance. PHP offers this right out of the box and isolates each request's state automatically. Wicket claims that it decreases development time, and while it may after a lot of experience with it, recompiling all your files just to see the changes each time you edit your HTML, CSS, or Java is quite annoying and time consuming. PHP compiles files on the fly, meaning I can change a file and just refresh. When dealing with elements such as lists, you have to go through quite a process just to make that work (sketched below).
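To give a feel for the list situation, here is roughly what a trivial list looks like the Wicket way. This is a sketch from memory rather than WattWaiter's actual code, and the page name and component ids (HoursPage, "hours", "label") are made up:

import java.util.Arrays;
import java.util.List;
import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;
import org.apache.wicket.markup.html.list.ListItem;
import org.apache.wicket.markup.html.list.ListView;

public class HoursPage extends WebPage {
  public HoursPage() {
    List<String> hours = Arrays.asList("6am: low", "noon: high", "6pm: moderate");
    // The markup file needs a matching placeholder element:
    //   <li wicket:id="hours"><span wicket:id="label">text</span></li>
    add(new ListView<String>("hours", hours) {
      @Override
      protected void populateItem(ListItem<String> item) {
        item.add(new Label("label", item.getModelObject()));
      }
    });
  }
}

In PHP this would be a three-line foreach in the template, which is why the ceremony stands out to me.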

However, I am quite biased, since I have years of experience with the PHP/MySQL/Apache stack and find that there is much more documentation and many more tutorials for it (which Wicket really lacks). That said, Wicket is a framework and should be treated as such. It aims to provide a better way to create web applications, and for some I guess it does.

If I had to choose a framework for PHP, I would choose CodeIgniter because it is lightweight, easy to learn (in a few hours you've learned the whole framework), and provides various classes that you are likely to use often. It offers everything you could possibly need, including easy object-oriented interfaces to databases. It provides the abstraction you want with an MVC design, and benchmarking is part of the package. It also allows you to create an app and package it up to bring it anywhere: just edit the configuration on the new server and you are ready to launch.


Other PHP frameworks are a huge mess, with enormous libraries like PEAR to drag around and load. If you want rapid development in PHP, I suggest this lightweight option.

Stepping off the soapbox....

During the development of WattWaiter, I enjoyed the company of my co-developers; we seemed to work together quite well. As we divided up the work, everything fell into place because of good communication. We had daily meetings on IRC with a bot I wrote in Python that screen-scraped the commit log from Google Code into our channel. It helped a lot because we wouldn't have to repeatedly say "I committed, update, see if that works"... "What did you update?"... and so on. We used issue management to track our tasks like any developer would, and although we all knew who was doing what, due to the small size of our group, it still served as a good reminder. Without it, I would probably end up doing someone else's work.
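The actual bot was written in Python, but the announcing half is tiny in any language. Here is a bare-bones sketch of the idea in Java; the server, channel, and commit message are placeholders, and a real bot would wait for the server's welcome reply before joining:

import java.io.PrintWriter;
import java.net.Socket;

public class CommitAnnouncer {
  public static void main(String[] args) throws Exception {
    Socket irc = new Socket("irc.example.net", 6667);
    PrintWriter out = new PrintWriter(irc.getOutputStream(), true);
    // Minimal IRC handshake, then push one commit line into the channel.
    out.print("NICK commitbot\r\n");
    out.print("USER commitbot 0 * :commit announcer\r\n");
    out.print("JOIN #wattwaiter\r\n");
    out.print("PRIVMSG #wattwaiter :r42 by remy: fix 1024x768 layout\r\n");
    out.flush();
    irc.close();
  }
}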



Here is our 2.0 screenshot:






Here is a 2.1 Screenshot:




Tuesday, November 24, 2009

Eco-Depot Code Review

I performed a review on the Eco-Depot system. My summary is at the bottom if you don't wish to read through all the details. Please send me suggestions about the quality of my review; I'd like to hear them.


Eco-Depot Review
reviewer: Remy Baumgarten


A. Review the build

The first part of any review is to verify that the system builds and passes automated quality assurance.

A1. Download the system, build it, and run any automated quality assurance checks (use "ant -f verify.build.xml"). If any errors occur, note these in your review.

BUILD SUCCESSFUL

B. Review system usage

If the system builds, the next step is to play with it.

B1. Run the system. Exercise as much of its functionality as possible. Does it implement all of the required features?

I ran the application with 11/23/2009 and it worked as it's supposed to.

B2. Try to break the system by providing it with unexpected input. Can you make the system crash or generate a stack dump? If so, note the input that caused the failure.

By deleting the date and clicking submit, it goes to 11/24/2009 instead of today (the 23rd) or showing an error.

B3. Evaluate the user interface to the system. If it's a web app, is the interface self-explanatory? Does the system require unnecessary scrolling to see the data, or can it all fit in a single screen? Does it seem "professional", or does it look "amateurish"?

Data fits on one page. Decimal places should be eliminated. The key is good; it lets us know what we are looking at. A bit of CSS/JS code would be nice.

C. Review the JavaDocs.

Download the system and generate the javadocs (use "ant -f javadoc.build.xml"). Navigate to the build/javadoc folder and click on index.html to display the JavaDocs in a browser. Read through the JavaDocs and assess the following:

C1. Does the System Summary (provided in an overview.html file) provide a high-level description of the purpose of the system? Does it explain how each of the packages in the system relates to the others? Is the first sentence self-contained?

"A Wicket application of EcoDepot build system." This should be elaborated to describe the application briefly.

C2. Do the Package Summaries (provided in package.html files) provide a high-level description of the purpose of the package? Do they explain how the classes in the package relate to each other? Is the first sentence self-contained?

Yes. This is very helpful in understanding what each package's purpose is.

C3. Do the Class Summaries (provided at the top of each .java file) provide a high-level description of the purpose of the class? Does it provide sample code for clients of the class, if useful? Is the first sentence self-contained?

There is a high-level description. There is no sample code, which would be helpful if someone wanted to use your class.

C4. Do the Method Summaries (provided before each method) explain, from a client-perspective, what the method does? Do they avoid giving away internal implementation details that could change? Do they document any side-effects of the method invocation? (Note that you can click on the method name to see the source code for the method, which is helpful to assessing the correctness and quality of the javadoc.)

The methods provided what I found to be sufficient documentation. setThresholds is a good example of this.

C5. Please review Chapter 4 of the Elements of Java Style for additional JavaDoc best practices, and check for compliance.

Very good.

D. Review the names

One of the most important forms of documentation (if not the most important) in any software system is the choice of names for program elements, such as packages, classes, methods, instance variables, and parameters. Due to evolution in requirements and design changes, the name originally chosen for a program element may no longer be appropriate or optimal. An important goal of review is to ensure that the names of program elements are well suited to their function. Due to the refactoring capabilities in modern IDEs such as Eclipse, renaming need not be a burden.

D1. Do another pass through the JavaDocs, this time concentrating on the names of packages, classes, and methods. Are these names well chosen? Do they conform to the best practices in Elements of Java Style, Chapter 3? Can you propose better names?

I would propose using names that are not prefixed with the name of the project; since these files are already in the project's package, the prefix is redundant.

D2. Once you have reviewed the names displayed in the JavaDocs, review the source code for internal names in the same way.

Looking throughout the source, I found that the names explain exactly what their purpose is. Easy to follow.

E. Review the testing.

The system should provide a useful set of test cases.

E1. Run Emma or some other code coverage tool on the system ("ant -f emma.build.xml"). Look at the uncovered code in order to understand one aspect of the limitations of the testing.

Emma reports ~65-85% coverage. There are 4 test cases. They test for various conditions, not strictly happy paths. A few more tests to increase the coverage, and the separation of tests into their own classes, would be nice.

E2. Review the test cases. Is each component of the system exercised by a corresponding test class? Do the test cases exercise more than "happy path" behaviors?

Described above.

E3. Review the test output. Under default conditions, the test cases should not generate any output of their own. If output is desired for debugging purposes, it should be controlled by (for example) a System property.

There is no extraneous output.

F. Review the package design

The JavaDoc review is focused on whether the system design is correctly explained. In this section, you start to look at whether the system design is itself correct.

F1. Consider the set of packages in the system. Does this reflect a logical structure for the program? Are the contents of each package related to each other, or do some packages contain classes with widely divergent functions? Can you think of a better package-level structure for the system?

After reviewing this requirement, I found that the classes are logically divided, yet they are not organized into packages.

G. Review the class design

Examine each class implementation with respect to at least the following issues.

G1. Examine its internal structure in terms of its instance variables and methods. Does the class accomplish a single, well-defined task? If not, suggest how it could be divided into two or more classes.

Yes, classes are logically divided.

G2. Are the set of instance variables appropriate for this class? If not, suggest a better way to organize its internal state.

Yes, the instance variables are related to the class.

G3. Does the class present a useful, but minimal interface to clients? In other words, are methods made private whenever possible? If not, which methods should be made private in order to improve the quality of the class interface to its clients?

There are public methods, but they need to be public because they are used by other classes. However, restructuring this class could allow a single method to be public, with internal (private) methods used within it.

H. Review the method design

Examine each method implementation with respect to at least the following issues.

H1. Does the method accomplish a single thing? If not, suggest how to divide it into two or more methods.

Yes; however, I suggest refactoring it the Wicket way for creating HTML and embedding CSS, using components such as Labels and ListViews/Repeaters (for tables) instead of building strings to display on the index page.

H2. Is the method simple and easy to understand? Is it overly long? Does it have an overly complicated internal structure (branching and looping)? If so, suggest how to refactor it into a more simple design.

Described above.

H3. Does the method have a large number of side-effects? (Side effects are when the result of the method's operation is not reflected purely in its return value. Methods have side-effects when they alter the external environment through changing instance variables or other system state. All "void" methods express the results of their computation purely through side-effect.) In general, systems in which most methods have few or zero side-effects are easier to test, understand, and enhance. If a method has a large number of side-effects, try to think about ways to reduce them. (Note that this may involve a major redesign of the system in some cases.)

The methods are mainly self-contained.

I. Check for common look and feel

I1. Is the code implemented consistently throughout the system, or do different sections look like they were implemented by different people? If so, provide examples of places with inconsistencies.

The code looks consistent.

J. Review the documentation

J1. Does the project home page provide a concise and informative summary of what the system accomplishes? Does it provide a screen dump that illustrates what the system does?

There is neither a screenshot nor a clear description on the home page.

J2. Is there a UserGuide wiki page? Does it explain how to download an executable binary of the system? Does it explain the major functionality of the system and how to obtain it? Are there screen images when appropriate to guide use? Try following the instructions: do they work?

The user guide is detailed, and it explains exactly what the application's purpose is.

J3. Is there a DeveloperGuide wiki page? Does it explain how to download the sources and build the system? Does it provide guidance on how to extend the system? Try following the instructions: do they work?

Yes, this was well written. It even describes how to set up the development environment and includes a link to the formatting XML configuration.

K. Review the Software ICU data

For this step, you must ask the Hackystat project owner to add you as a spectator to their project so you can invoke the Software ICU for their system. Run the Software ICU on the project.

K1. Is the Software ICU gathering data consistently and reliably? If not, what data appears to be missing, and why might it be missing?

Yes, the data is present, though it shows only a little activity.

K2. If data is present, what does it tell you about the quality of the source code? Is the current level high, medium, or low? What are the trends in quality?




Coverage improved, complexity increased, coupling improved, and churn went down a lot, probably because of a lot of large commits. DevTime shows only one spike, as do commits and builds. From the looks of the stats, it appears this project was done at the last minute.

K3. If data is present, what does it tell you about the group process? Are all members contributing equally and consistently?

It looks like 3 members contributed and one didn't build at all, but this could be due to an improper sensor setup.

L. Review Issue management

Go to the project home page, then click on the Issues tab. Next, search for "All Issues" to retrieve a page containing all issues, both open and closed. Next, select "Grid" view, and select "Owner" for the rows. The result should be a page of issues in a grid layout, where each rows shows all of the issues (both open and closed) for a particular project member.

L1. Does the issue management page indicate that the project members are doing planning? In other words, that they have broken down the project into a reasonable set of issues (i.e. tasks taking one day or longer are represented by an issue)?

There are no issues present.

L2. Does the issue management page indicate that each member is contributing to the project? Does each member have at least one current "open" task? Has each member completed a reasonable number of tasks?

There are no issues present.

M. Review continuous integration

Go to the Hudson server, login, and select the continuous integration job associated with the project.

M1. Is the system being built regularly due to commits? Are there long periods (i.e. longer than a day) when there are no commits?

When the group started, there was a period of 5 days without any commits, and then activity toward the end.

M2. If the system build fails, are there situations in which the system stayed in a failed state for a long period of time (i.e. longer than an hour)?

Failed builds were fixed immediately.

M3. Is there a continuous integration job? If so, check the configuration. Is it correct? Will it be triggered by a commit? Check the console output from one of the invocations. Are the appropriate Ant tasks (i.e. checkstyle, pmd, junit, etc.) being executed?

Yes, it pulls svn every 5 minutes and builds with the ant tasks.

M4. Is there a daily build job? If so, check the configuration. Is it correct? Will the job be run regularly once a day? Check the console output from one of the invocations. Are the appropriate Ant tasks for a daily build job (i.e. coverage, coupling, complexity, size, etc.) being executed?

Yes. Based on the console output all the tasks are being run.

Summary

Based upon the above sections, provide an overall summary of your perspective on the project. Does it fulfill the requirements, and if not, why not? Is the code of reasonable quality based upon its design, implementation, and quality assurance techniques? If not, what should be improved? Does the group appear to be functioning well based upon Software ICU, Issue, and Continuous Integration data? If not, what appears to be wrong and what should they do to improve?

Good job on this project. I think that spending time now, before it's too late, learning the Wicket way of doing things will make things easier later, so time isn't wasted rewriting code. A little CSS would help too, as would packaging the product appropriately. There are some great tutorials at http://w3schools.com/ that will help with CSS. I think this project has a lot of potential given time and effort. Good work.

Monday, November 23, 2009

WattWaiter 1.0 Released

My software development group has been working on a Wicket application that queries a server to retrieve data from the Oahu power grid and displays an interface for a particular day's power consumption or carbon intensities. We named this program WattWaiter.

Here is a brief overview of what the project is about:

Wattwaiter is a web application that provides a visual display of carbon intensities for a given day. This information can be used by consumers to adjust their electrical usage and become “smart” consumers. By observing carbon intensities, consumers can determine which times during the day are most efficient for electrical usage. This benefits the consumer by allowing usage of lower costing electricity and the electric company by balancing their load.

Carbon intensities are displayed in hourly increments with green, yellow, and red flags corresponding to low, moderate, and high carbon intensities respectively. Users are now able to adjust their electricity usage to coincide with lower intensity periods. A sample Wattwaiter display is shown below.

Here is a screenshot of the early stages of WattWaiter:

Wicket is a bit rough to learn, but reading the Wicket book and the Wicket wiki helped a lot. I was soon able to program the Wicket way. Coming from a PHP/MySQL background, I found this a bit of overkill, but I am sure that this kind of setup will pay off when the application grows.

Working in this team is really nice because I like to see all the assigned tasks come together. Communication between members is also really fun because we got to develop strategies, gain others' perspectives, and work with others' code.

Honestly, I am quite happy with the way our system was designed, but not being an expert or even a novice at Wicket at this point, I can guess that there is a lot of room for improvement and better ways to do things.

Here is a screenshot of the software ICU (intensive care unit) as described in a previous post. This will give you an idea of how well our project is performing.




As you can see, we need to commit more often (churn). The problem is that if there is a big overhaul in the project, or we need to redesign something, there's no way around a churn spike, because you're not going to commit in the middle of it. We also need to develop more knowledge about the Wicket testing framework; we are having some issues developing proper tests for it.
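For what it's worth, Wicket ships with WicketTester, which renders pages without a servlet container, and that seems to be the intended starting point. Here is the basic shape of such a test; the application and page class names below are stand-ins, not our real classes:

import org.apache.wicket.util.tester.WicketTester;
import org.junit.Test;

public class TestHomePage {
  @Test
  public void homePageRenders() {
    // WicketTester boots the application in-process; no Jetty/Tomcat needed.
    // WattWaiterApplication and HomePage are placeholder class names.
    WicketTester tester = new WicketTester(new WattWaiterApplication());
    tester.startPage(HomePage.class);
    tester.assertRenderedPage(HomePage.class);
  }
}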

Our Project Page is hosted here: http://code.google.com/p/wattwaiter/

Our distribution is here: http://wattwaiter.googlecode.com/files/wattwaiter-1.0.1122.zip

Our Wiki, User Guides and Dev Guides are here: http://code.google.com/p/wattwaiter/w/list

Sunday, November 15, 2009

WattDepot Version 2.0 Released

WattDepot v2.0 implements a few more query options and employs an updated API for the wattdepotclient class. The necessary changes noted by our code reviewers were also addressed. Since we already had the design correct, changes were minimal. We have a lot of unit tests, but it seems we probably need more, since our code coverage is at about 82%.

Teamwork in version two was just as good as it was during v1.0's development phase. We met regularly on IRC, and with the assistance of a Python bot I wrote, we got commit logs posted directly into the channel. We discussed design and divided the work there and on the phone.

What was really interesting was the opportunity to employ a new tool called Hackystat to measure the health of our project.

Here is a screenshot:



Hackystat is a great way to monitor your project as it computes various statistics about how your project is performing.
"Hackystat is an open source framework for collection, analysis, visualization, interpretation, annotation, and dissemination of software development process and product data."
Hackystat works by relying on the user to install sensors in their project as well as in their IDE. It dispatches data to the Hackystat server every 5 minutes (in the IDE) and every time Ant is invoked. The kind of information transmitted relates to the files you're working on and the time you spend on them.

This application can answer questions like these:
  • What day and time during the month was Oahu energy usage at its highest? How many MW was this?
November [2,3,4,5,6,9,10,11,12,13,16,17,18,19,20,23,24,25,26,27] at 995MW
  • What day and time during the month was Oahu energy usage at its lowest? How many MW was this?
November [2,3,4,5,6,9,10,11,12,13,16,17,18,19,20,23,24,25,26,27] at 493MW
  • What day during the month did Oahu consume the most energy? How many MWh was this?
There is no data for this at the moment.
  • What day during the month did Oahu consume the least energy? How many MWh was this?
There is no data for this at the moment.
  • What day during the month did Oahu emit the most carbon (i.e. the "dirtiest" day)? How many lbs of carbon were emitted?
November [4,5,16,17,30] at 29,959lbs

  • What day during the month did Oahu emit the least carbon (i.e. the "cleanest" day)? How many lbs of carbon were emitted?
November [7,8] at 22,908lbs

You can download our distribution here: WattDepot CLI v2.0

Wednesday, November 11, 2009

1-on-1 Code Reviews

The Elua branch had one-on-one code reviews today. I found this mildly useful in that both parties were able to elaborate a little more on the reports. However, I found the written reports much more detailed, and therefore more useful. I think code reviews should be about going over every line of the code after the reviewer has read and understood the program, perhaps better than the author has.


The reviews from the reports were in general very helpful, simply because when someone else looks at your code, they notice things you didn't. For the next project I am hoping to get more time with the reviewer, and I hope they read the code instead of glancing at it looking for specific conventions.

Sunday, November 8, 2009

Code Review

It was my task to review the code of two WattDepot branches. I found that the projects were not completed to specification and were hard to review because of this. However, I'll attempt to give insight and advice on what needs to be done in terms of error handling and design.

Ewalu - The commands are all implemented and working; good job. I think this code needs a little more time reorganizing and it will be a really good project. Exception handling needs to be worked on, and error messages need to be given: there are unhandled NumberFormatExceptions and ArrayIndexOutOfBoundsExceptions, along with some design issues, namely remapping the structure of the project into logically divided packages and classes. Currently the source is mostly in one file. Test cases should also check how the program responds to invalid input and exceptions (see the sketch below the report link).
Click for Report
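As a generic illustration of the kind of handling I mean (not Ewalu's actual code; the class name and bounds are invented), numeric input can be validated and turned into a readable message instead of a stack trace:

public class DayArgument {
  /**
   * Parses a day-of-month argument, printing an error message instead of
   * letting a NumberFormatException reach the user.
   */
  public static int parseDay(String arg) {
    try {
      int day = Integer.parseInt(arg.trim());
      if (day < 1 || day > 31) {
        throw new NumberFormatException(arg);
      }
      return day;
    }
    catch (NumberFormatException e) {
      System.err.println("Error: '" + arg + "' is not a valid day (1-31).");
      return -1;
    }
  }
}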

Ehiku - The Ehiku branch also needs a bit more time to implement and convert the rest of the commands into a more organized project consisting of classes and packages. It seems the transition is currently underway in the way the commands are parsed and dispatched; I'd like to see this design completed. There are some mishandled exceptions in the code as well (IllegalArgumentException, ResourceNotFoundException), and the code leaves '[]' after a result is printed to the screen. There are no test cases either. But as I said, this branch can easily clean this up and just needs a little more time to do so.
Click for Report

Wednesday, November 4, 2009

Elua Branch Experience

Kevin and I have been working on the Elua branch every day for about a week now. It has been a wonderful learning experience. The programming part of it was not difficult, but the skills with the tools we had learned previously really came into use. By using quality assurance tools and continuous integration, I really feel we developed a strong command line interface for the WattDepot project.

I think the most interesting part of this experience was working in a team. I gained more experience coordinating tasks, which was especially useful when working on two interoperating methods. We both worked on methods that complemented each other and continually committed to make sure that they integrated. We also found that dividing the work up into methods and tasks was useful, but we consulted each other when there was a decision to be made.

The most interesting part, which I really want to learn more about is design patterns. The only design pattern I have a lot of experience with is MVC (Model View Controller) which I use extensively in PHP and a bit in Ruby on Rails.

We finished the entire specification to my knowledge, although it seems we can always make something better. I would never call a project finished, because that simply never happens. A project is always in a continuous state: once you stop working on it, it deteriorates. I have to admit I've grown a bit attached to this project because of the work I've put into it, and I hope it can evolve into something greater in the future.

Download and test out our CLI for WattDepot; I'd like to know what you think.

SVN Checkout:

svn checkout http://wattdepot-cli.googlecode.com/svn/branches/elua wattdepot-cli-read-only

Monday, November 2, 2009

Hudson for Continuous Integration


I found Hudson a really interesting and useful tool for continuous integration. I was surprised how slick the interface is and how easy it is to set up. It includes AJAX feedback when you are configuring your project and lets you know if you are setting it up incorrectly.

Hudson also came in really handy because when you are making a lot of changes and committing code every 30 minutes or so, sometimes you forget to run ant -f verify.build.xml. Hudson will send you an email within 5 minutes (the interval we configured Hudson to check our SVN repository) to let you know that your build failed. If it continuously fails, the little icon for your project will turn from a sun into a lightning storm (something that makes it fun to avoid!).

When I was converting my project over to smaller classes, so it was not one big monolithic class, I tried to do the conversion all in one shot without running verify or committing to let Hudson complain. That was a mistake. I had about 100 Checkstyle issues to go through and fix, and it took a long time. It would have been easier if I had started with one class and edited the following classes accordingly, so as not to let Checkstyle complain.

All in all it was a great experience and I wouldn't go without Hudson and all the tools we are using now for any other project.

Monday, October 26, 2009

Smart Grid: WattDepotCLI Elua Branch

Today is the start of a new project and it is getting me excited. My partner Kevin Chiogioji and I are creating a CLI (command line interface) client to interact with data from the Oahu, Hawaii smart grid. This will enable us to monitor energy usage and other data related to the wattdepot project.

"WattDepot is a RESTful web service that collects electricity data (such as current power utilization or cumulative power utilization) from meters and stores it in a database. The data can then be retrieved by other tools for visualization and analysis."

Once the command line is done we will be creating a web application to allow the users to monitor information about their consumption of energy in order to create awareness of how much energy is being consumed.

I found a small bug in jar.build.xml that incorrectly specified the output jar as wattdepot-clientlib.jar when it should have been wattdepot-lib-client.jar. We also modified the dist.build.xml file to build unique distributions in zip format, to distinguish our branch from the others.

Our branch repository is located here:
WattDepot: Elua Branch.
Our distribution (although only now a bootstrap) is located here:
WattDepot: Elua Branch Distribution Download.

Sunday, October 18, 2009

Midterm Questions

Q. Explain how it is or how it is not possible for an individual like yourself to make money from FOSS (Free Open Source Software).

A. It is not only possible, but many people are doing this now. We can think of two likely situations. First, a developer can be hired by a company to work on code, adding features or modifications to meet the requirements of the company. Second, companies like IBM and Red Hat pay their employees to improve software like the Linux kernel so it can improve the company's service offerings.

Q. Why are comments and documentation so important in the success of your project?

A. Omitting comments and documentation will typically result in the failure of your project, because it goes directly against what the second and third prime directives state. The first prime directive will also fail: if users and developers can't work on the code easily, then the system will not achieve the functionality that was initially stated.

Q. Implement the foreach construct in a main method and give an example of class signature that is required for the use of iterating over that collection. The iterable type is arbitrary.

A.
import java.math.BigInteger;
import java.util.Iterator;

class FibonacciSequence implements Iterable<BigInteger> {
...

  // iterator() is what the for-each construct calls behind the scenes.
  public Iterator<BigInteger> iterator() {
    return new FibonacciIterator();
  }

  public static void main(String[] argv) {
    FibonacciSequence fib = new FibonacciSequence();

    for (BigInteger i : fib) {
      System.out.println(i);
    }
  }
}

Q. Give an example of unitasking. Think creatively.

A. An example of unitasking would be organizing many things you have to do into one specific task that accomplishes them all. For example, if you have many classes to write, you could make a superclass in order to save time on repetitive tasks.

Q. What are some benefits of Static Analysis and Dynamic Analysis, list tools used to perform these automatically and explain how the two types differ.

A. Static analysis can find bugs in binaries, bytecode, or source files along paths that may never be executed during dynamic analysis. Dynamic analysis, having to run the code, will only find bugs in code that is actually executed.
PMD, FindBugs, and Checkstyle are all static analysis tools.
JUnit is a dynamic analysis tool.

Q. Why is it a good idea to make sure the version of a required dependency that Ant retrieves is correct?

A. It is very important that all users use the same dependencies in order to limit the scope of variables that could be causing bugs within a given package. If all packages are the same and many users report the same problem, it is that much easier to figure out the cause.

Q. Explain the importance of regression testing and its role in large systems.

A. Regression testing is essential if you do not want spaghetti code that is littered with bugs. It is especially useful when dealing with large systems, because when many people are working on the same code, bugs can be introduced at any time. Running regression tests on every build can catch conflicting behavior introduced into the code during nightly builds.

Q. Give two scenarios where white box testing is beneficial and where blackbox testing is beneficial.

A. White box testing can be more detailed in its findings if it is done correctly, all the way down to the unit level. The risk is that white box tests may not test for the right things or integrate well with the system. White box testing should be employed in a fashion that exhibits both well-thought-out and reckless behavior in order to break the system.
Black box testing should be performed to see if all the components fit together, or in situations where you want to feed arbitrary data into a complete running system to see how it behaves. The system should be tested with a fuzzer in order to find bugs in software that does not handle input correctly due to improper validation.

Q. Give some problems that version control set out to address.

A.
The double maintenance problem (having multiple copies to maintain).

The shared data problem (multiple developers needing to access one file at the same time).

The simultaneous update problem (preventing a file from being updated at the same time by multiple parties).

Q. Why is Git so beneficial for large open source projects?

A. Git allows everyone to work faster and have copies of the whole repository, instead of relying on bandwidth to a centralized location. Having the complete history one command away is power to the developer.

Sunday, October 11, 2009

Robocode is Now Available Through SVN

I have posted FlankBot on Google Code. All you need to do in order to play with FlankBot is install Ant and read the instructions on my Google Code site. The UserGuide wiki will give you instructions on how to install FlankBot using SVN. I also added a Google discussion group for FlankBot. There are some issues to be resolved with receiving automatic SVN updates in the discussion group, so I suggest following the latest releases with the RSS or Atom feed located on the FlankBot Google Code site until this is resolved. Currently they are only taking manual requests for automated updates.

I really enjoy working with Subversion and I am excited about working in a group with this tool. I have worked on another Google Code project called tweetx, so this was nothing new, but nevertheless, Google's simple applications make life more enjoyable.


Wednesday, October 7, 2009

Quality Assurance Testing

Creating behavioral, acceptance, and unit tests is time consuming and not a whole lot of fun, but it does pay off. I created six tests for my Robocode robot FlankBot to establish something of a benchmark. This will be useful when I start changing the robot's code: if I make changes and it does not pass these benchmarks, then I know that I must have done something wrong and should go back and investigate.

The tests I created check the following (a sketch of one such test appears after the list):

Test if FlankBot flanks most of the time.
Test if FlankBot has good survivability techniques.
Test if FlankBot can continually beat Walls (a hard bot to beat).
Test if FlankBot can continually beat Corners.
Test if FlankBot can avoid being rammed into.
Test if FlankBot has good shooting accuracy skills against Walls.
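For the curious, here is the rough shape of one of these behavioral tests using Robocode's control API. Treat it as a sketch: the test class name, install path, and round count are placeholders, and the exact API details may differ slightly from the version in my build:

import static org.junit.Assert.assertTrue;

import java.io.File;
import org.junit.Test;
import robocode.BattleResults;
import robocode.control.BattlefieldSpecification;
import robocode.control.BattleSpecification;
import robocode.control.RobocodeEngine;
import robocode.control.RobotSpecification;
import robocode.control.events.BattleAdaptor;
import robocode.control.events.BattleCompletedEvent;

public class TestFlankBotVersusWalls {
  private BattleResults[] results;

  @Test
  public void flankBotBeatsWalls() {
    // Point the engine at a local Robocode install (path is a placeholder).
    RobocodeEngine engine = new RobocodeEngine(new File("/path/to/robocode"));
    engine.addBattleListener(new BattleAdaptor() {
      @Override
      public void onBattleCompleted(BattleCompletedEvent event) {
        results = event.getSortedResults();
      }
    });

    // "rlb.FlankBot*" picks up the latest packaged version of FlankBot.
    RobotSpecification[] robots =
        engine.getLocalRepository("rlb.FlankBot*,sample.Walls");
    BattleSpecification battle = new BattleSpecification(
        10, new BattlefieldSpecification(800, 600), robots);
    engine.runBattle(battle, true); // block until the battle finishes
    engine.close();

    // Sorted results put the winner first.
    assertTrue("FlankBot should beat Walls",
        results[0].getTeamLeaderName().startsWith("rlb.FlankBot"));
  }
}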

I have to say the most time-consuming part of this was getting the Eclipse IDE to work with Ant. There really has to be a better way to make them work together smoothly. Importing projects and getting JUnit to work also seems like a lot more hassle than it needs to be. At this point I am missing Make and Vim. I will continue looking for a better way to iron out these oddities, as well as the various problems with importing other people's projects.

Most of the tests I was able to perform were centered around behavioral tests. Unit tests seemed somewhat counterintuitive to the way my bot was designed, but if you can think of a way to make them work after looking at FlankBot, I'd love to hear it. If I had designed my bot with explicit movement methods, it would have worked better for the unit tests. All in all, it was a good learning experience, but it took a little more time than I expected, as there is more build and test code to work with than there is actual robot code.

Last but not least, I'd like to mention the tool EMMA. It is a code coverage tool, which does exactly what it says: it checks how much of your code your JUnit tests covered. I found that my code coverage is not as high as I hoped for. This is most likely because I have a few methods that are not fully implemented yet and have yet to be used.

The lessons I learned over the last few days about build code will have a great effect on larger projects and alleviate a lot of the frustrations that come with them; there is no doubt about that. I am looking forward to seeing how this scales and to incorporating more tools into my reserve.

Download the latest version of FlankBot with Tests here

Tuesday, September 29, 2009

Automated Quality Assurance and Robocode Ant Build

Ant and Ivy are a great combination for automating your build for distribution. Think of Ant as what Make is to C/C++, and think of Ivy as what FreeBSD Ports is to automated dependency resolution. The reason Ant and Ivy are so popular is their simplicity and flexibility. Instead of you downloading all packages manually and configuring the software, Ant and Ivy do this automatically, which helps in accomplishing the Three Prime Directives of Open Source Software.

Quality assurance is not an entirely fun task, so I used PMD, CheckStyle, and FindBugs to aid in the process. Here is a quick overview of what these tools can help you accomplish:

PMD scans Java source code and looks for potential problems like:

  • Possible bugs - empty try/catch/finally/switch statements
  • Dead code - unused local variables, parameters and private methods
  • Suboptimal code - wasteful String/StringBuffer usage
  • Overcomplicated expressions - unnecessary if statements, for loops that could be while loops
  • Duplicate code - copied/pasted code means copied/pasted bugs

CheckStyle also scans Java source code, looking for style violations among various other issues.

FindBugs checks Java bytecode for known bug patterns. This static analysis tool can spot null pointer problems, issues with equals() and hashCode() implementations, and many other issues that may have been overlooked.
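To make that concrete, here is a deliberately bad little class (illustrative only) showing the kind of low-hanging fruit these tools flag:

public class BadExample {
  private int unused; // PMD/FindBugs: unused private field (dead code)

  public void parse(String s) {
    try {
      int n = Integer.parseInt(s); // PMD: unused local variable
    } catch (NumberFormatException e) {
      // PMD: empty catch block silently swallows the failure
    }
  }

  @Override
  public boolean equals(Object o) {
    return o instanceof BadExample; // FindBugs: equals() without hashCode()
  }
}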

While these tools should not replace a person doing line-by-line code analysis and review, they do help automate catching the low-hanging fruit.

Along with creating a fully automated build, tested with JUnit test cases and ready for you to download and run instantly, I have also improved FlankBot, my Robocode battle bot, to be more intelligent against Walls-based robots. Some of the improvements are:

  • Conserving energy (only shooting when close; only shooting if the miss margin is below n)
  • Detection of Walls bots (95% win rate against the sample Walls)

Download Flankbot with Robocode here.

Sunday, September 20, 2009

FlankBot Crushes the Opponents!

I recently finished developing FlankBot, a robot that battles other robots in Robocode (http://robocode.sourceforge.net). The purpose of this bot was explained in my previous blog entry, and my design paid off in the end, consistently beating all the sample bots.

Here is a video clip of FlankBot in action:




FlankBot has a defensive strategy: it passively attacks its opponents while maintaining distance. When FlankBot scans an enemy, it moves to the enemy's side and fires. Once it stops, it aims at the enemy and fires, with the strength of its shot depending on the distance to the target. Since its ability to reliably hit the target depends on that distance, FlankBot shoots at minimal power if the target is far away.
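Here is a hypothetical sketch of that distance-scaled firepower rule, in the spirit of FlankBot's strategy but not the actual FlankBot source:

import robocode.Robot;
import robocode.Rules;
import robocode.ScannedRobotEvent;

public class FlankLikeBot extends Robot {
  @Override
  public void onScannedRobot(ScannedRobotEvent e) {
    // Hit probability falls off with distance, so spend energy accordingly:
    // full power up close, minimum power on long shots. The 200-pixel
    // threshold is an illustrative value, not FlankBot's actual tuning.
    double power = e.getDistance() < 200
        ? Rules.MAX_BULLET_POWER   // 3.0
        : Rules.MIN_BULLET_POWER;  // 0.1
    fire(power);
  }
}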

This strategy paid off well, as you can see from the following statistics. Each set of data is taken from an average of 100 rounds.


Rank   Robot Name      Total Score    Survival   Surv Bonus   Bullet Dmg   Bullet Bonus   Ram Dmg * 2   Ram Bonus

1st    rlb.FlankBot    5419 (81%)     4400       880          114          18             7             0
2nd    Walls           1287 (19%)     600        120          502          57             8             0

1st    rlb.FlankBot    11198 (81%)    4900       980          4335         853            110           19
2nd    Crazy           2561 (19%)     100        20           2192         20             229           0

1st    rlb.FlankBot    19059 (72%)    4500       900          11353        2111           196           0
2nd    Fire            7439 (28%)     500        100          6537         262            40            0

1st    rlb.FlankBot    15911 (79%)    4750       950          8571         1625           14            0
2nd    Corners         4166 (21%)     250        50           3726         135            5             0

1st    rlb.FlankBot    10804 (55%)    3450       690          5730         811            102           21
2nd    SpinBot         8817 (45%)     1550       310          5822         510            506           118

1st    rlb.FlankBot    17527 (88%)    4800       960          9810         1920           37            0
2nd    Tracker         2497 (12%)     200        40           2135         70             13            39

1st    rlb.FlankBot    18101 (100%)   5000       1000         9910         1980           181           30
2nd    SittingDuck     0 (0%)         0          0            0            0              0             0

1st    rlb.FlankBot    14171 (51%)    2800       560          9442         1219           115           36
2nd    RamFire         13353 (49%)    2200       440          6978         162            2221          1352

I learned that it is quite difficult to develop a strategy that consistently defeats other robots. It's best to think about developing a defensive strategy and maintaining a little bit of distance, as many other bots will eventually die from spending too much energy.

The next robot I develop, which I think will be a supercharged version of FlankBot, will employ a more aggressive approach if it can be done right. I'd like to get behind the enemy and incorporate RamFire-like behavior. I'd also like to have a few different modes. For example, if my energy is consistently decreasing, I'd like FlankBot to try something new to outflank the enemy, or just become more defensive.

If you'd like to try out FlankBot you can download it here:
Download FlankBot (includes source)