Thursday, December 29, 2011
Dani recently purchased a little tiger. She's 6 months old and is named Juno. Pretty much the cutest thing ever. I'm looking forward to living with a pet. :) She's very affectionate and likes exploring our apartment. I promise I won't turn into one of those crazy people who only talk (or blog) about their pets. Probably.
In other news, I am finally watching the original Doctor Who episodes. Turns out I overestimated video quality in 1963. It's pretty much just static. :P The episodes themselves aren't so hot either, but I think they'll get much better soon.
Saturday, December 24, 2011
A Term In Review
After a 4-month hiatus, I think it's time to start blogging again. I've had a great term working for RL Solutions, a company that makes risk and feedback management software for hospitals. It is inevitable that healthcare practitioners will make some mistakes. These incidents can range from very serious (think: adverse drug reactions) to very mild (a patient left without checking out). Several studies report around 1 000 000 injuries a year, with anywhere from 45 000 to 90 000 deaths from medical errors (more info here and here). The idea is that healthcare institutions use software to log their errors, and run reports to learn and find ways to improve patient safety. For example, one study found that there were roughly 10% more medical incidents in the month of July. This so-called July effect seemed to be caused by new hospital staff starting in July.
The specific stuff I worked on was several .NET applications that take the incident information in our system, convert it to something similar to the HL7 CDA, and then send it securely to Patient Safety Organizations (PSOs). These PSOs then do some more powerful data mining and analysis on this data and give more detailed reports back to the senders. There are also some more boring legal reasons why hospitals might want to send data to PSOs, but I won't talk about them now (short story: incidents reported to PSOs can't be used against the hospitals in court).
This term, I had the privilege of reading through more massive spec documents that make little sense. I really wish they were better written. :/ Other new things this term were using XSLT to do data conversions and getting practice writing very thorough automated tests. The term itself was really fun, since this was the first time I got to work with a bunch of co-ops in a "co-op pen". The environment is also mega relaxed and fun, with PMs handing out beers once a week during work for no reason. :) There was also a David's Tea close by where everyone got to know us very well. :P All in all, it was a very fun and educational term.
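For the curious, here's roughly what driving an XSLT conversion looks like from code. The real pipeline was .NET, so treat this as a minimal Java sketch of the same idea (the file names are made up):

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class IncidentTransform {
    public static void main(String[] args) throws TransformerException {
        // Compile the stylesheet once; reuse it for every incident document.
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer =
                factory.newTransformer(new StreamSource("incident-to-cda.xslt"));

        // Apply the conversion: incident XML in, CDA-like XML out.
        transformer.transform(new StreamSource("incident.xml"),
                new StreamResult("cda-like.xml"));
    }
}
```

The nice part of this setup is that the mapping logic lives entirely in the stylesheet, so the conversion rules can change without touching the application code.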
Other notable things this term include me finishing Doctor Who. It's easily become my favorite show, and I'm disappointed that it took me this long to watch it. I'm currently downloading all the old episodes to see how comparatively awful they are. :P I also finally watched all of Arrested Development. I should have checked out that show much earlier too.
Next term should also be quite busy. I have 4 courses this semester (Testing, Requirements, Security :), and DB Implementations), as well as part-time development work for Karos Health and maybe even REAP too. Hopefully I'm not too busy with all that. :)
Monday, September 12, 2011
First Day
It's been a while between updates. Frosh week kept me very busy, but mostly I've been lazy. I'll be better this semester, promise.
Today was my first day at RL Solutions. It was quite exciting. Basically, the company makes a suite of software that helps hospitals track and prevent medical errors. I am still learning how everything works.
I am a little surprised how complicated some of the features are. For example, if a patient has some accident, say they fall inside the hospital for whatever reason, the hospital staff fill out a form. The form is very long and requires a lot of information (who, when, how, what happened, what happened after, who did you contact, etc...). It takes like 20 minutes to go through, and that doesn't include any insurance information (which is notoriously more complicated). This makes UI design an important priority for the software, since there is a lot of room for speeding up this data entry. Should be an interesting project. In general, the software's UI is pretty well built. It's a pleasure to use. :)
Development will be web-based .NET, something I've never done, so I'm excited to learn more about it.
My coworkers are all very nice. It should be a fun semester. There is a pool table. I look forward to using it. :P
Should be another great semester. :)
Friday, August 19, 2011
American Health Care Cost Infographics
I found a great infographic about healthcare costs in America. Check it out:
Via: Medical Billing And Coding
I thought the expensive outpatient care reason was odd. From what I understand, outpatient care should be less expensive than inpatient care. I still have a lot to learn about the healthcare field, I guess. Or maybe America is derping hard.
Thursday, August 18, 2011
Stanford CS courses!
Stanford University is offering a few courses next semester for free online. Here are the classes:
- Machine Learning
- Artificial Intelligence
- Databases
I've enrolled in the Machine Learning and AI courses for this Fall. Classes start on October 10th. I'm pretty excited. The courses will include lectures, assignments, and evaluations, just like any other course. It sounds very promising. I'd like to see how Stanford's education compares to Waterloo's. I will be taking AI in my final semester at Waterloo, so I'll be able to compare those two classes directly. Although I might not want to take Waterloo's AI course if I'm going to learn the material from this online course... I guess we'll see. You should consider registering for some of these courses if you aren't too busy this Fall. :)
In other news, I finished exams well and I'm enjoying a few weeks of relaxing before Frosh Week hits. After that, I start work at RL Solutions on September 12th. I'm also excited to start working there, and to go back to the Microsoft development stack that I love so much. :)
Labels:
AI,
Computer Science,
Education,
Machine Learning
Monday, August 8, 2011
Software Engineering All-Star Topics: Redundancy
There are a few very prominent topics in all fields. I think redundancy is one of the biggest ones in the field of software engineering. Redundancy in the context of software means having duplicated services or data. Why would you want to do this? Well, there are many reasons.
First, redundancy is a very powerful way of creating fault-tolerant applications. If there are two identical copies of a service, it's okay if one temporarily goes down. While this may seem like a rather inelegant (and expensive) solution, it works extraordinarily well. Scared about your web service going down? Make two of them. Or N of them. Worried about data corruption? Replicate it on separate hard drives (on potentially separate machines).
Got scalability problems? The solution might be to use redundancy to implement load balancing. This is commonly done to implement horizontal scaling.
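For instance, here's a toy round-robin balancer over a set of identical servers (the addresses are made up); real load balancers add health checks and smarter policies, but the redundancy idea is the same:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Hand back servers in rotation; the atomic counter keeps this thread-safe.
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                Arrays.asList("10.0.0.1", "10.0.0.2", "10.0.0.3"));
        for (int request = 1; request <= 6; request++) {
            System.out.println("request " + request + " -> " + lb.pick());
        }
    }
}
```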
Redundancy can also be used to solve a huge subset of performance problems through caching. Caching is just a form of data redundancy. In practice, caching is one of the biggest reasons computers are so fast today. The internet has many great examples of this. Your browser caches web pages to achieve huge performance boosts. Want to see the difference? Check out StumbleUpon. Start stumbling and notice how slow it is compared to refreshing your Facebook page. That's because the data you are accessing needs to be fetched from a web server, instead of (mostly) from your browser's local cache. DNS records are also cached by many machines on their way to you. Without this caching, every single page on the internet would take ~200ms longer to load, simply because DNS would have to redo all the name resolution queries every time. File caching in your OS is another good example of this. Without system file caching, your OS would run noticeably (and painfully) slower. Caching is responsible for some of the biggest performance leaps we've seen in computers. Interestingly enough, caching is usually implemented with hash tables, which are a computer science all-star topic.
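To make that concrete, here's a tiny sketch: a hash table keeping a redundant copy of a slow lookup's results. The 200ms delay and the returned address are made up, but the miss-then-hit pattern is the whole trick:

```java
import java.util.HashMap;
import java.util.Map;

public class DnsCacheDemo {
    private final Map<String, String> cache = new HashMap<String, String>();

    // Stand-in for a slow network lookup: ~200ms, like an uncached DNS query.
    private String slowResolve(String host) {
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "93.184.216.34"; // made-up answer
    }

    // The cache is just a hash table holding a redundant copy of past answers.
    public String resolve(String host) {
        String cached = cache.get(host);
        if (cached != null) {
            return cached; // hit: no 200ms penalty
        }
        String address = slowResolve(host); // miss: pay the full cost once
        cache.put(host, address);
        return address;
    }

    public static void main(String[] args) {
        DnsCacheDemo dns = new DnsCacheDemo();
        long start = System.nanoTime();
        dns.resolve("example.com");
        long afterMiss = System.nanoTime();
        dns.resolve("example.com");
        long afterHit = System.nanoTime();
        System.out.printf("miss: %d ms, hit: %d ms%n",
                (afterMiss - start) / 1000000, (afterHit - afterMiss) / 1000000);
    }
}
```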
This practice is not unique to software engineering. Redundancy has been used in most other engineering disciplines to establish fault tolerance for years. For example, Boeing 747s are equipped with 4 engines, but are designed to run with just 3.
Guess something useful came out of that Distributed Systems class after all. Want exam to be over though. :(
Sunday, July 31, 2011
How to write unmaintainable code
Here's a fun read on how to write unmaintainable code. For job security, of course. :)
This is probably the best thing that came out of my Architecture class. >_<
Wednesday, July 27, 2011
REAP Review
The client presentations for REAP were yesterday. The various teams presented their ideas to the REAP exec team, as well as to some stakeholders that might be using the products of our research. All the presentations that I saw were very interesting. I'm looking forward to seeing what the subsequent REAP teams do with the progress made so far.
Our presentation on the Mixed Reality Interface (MRI) went fairly smoothly (note to self: avoid making last minute demo changes. >_<). We talked about our plans to use the MRI table to create virtual museum exhibits that enhance museum-goers' experiences. Because of the playful and tactile nature of the table, kids would be quite attracted to this sort of exhibit. We are currently working with the Earth Sciences museum on campus to create a mining exhibit as a proof of concept. The museum is currently converting a hallway in the building into a mini mining exhibit. We hope to have a virtually enhanced exhibit to go along with the physical one by around the end of October. One of the subsequent REAP teams will be putting a lot of game design effort into making this project happen.
After the presentations, we all went to celebrate with lunch at the University Club. I always wondered what was in that building, and now I know. :P Weee!
In general, REAP was a great opportunity. We got to meet some great people in the digital projection industry, as well as work with some really bright people. We also got a chance to meet with people from all sorts of industries, like museum curators and home designers. The REAP members also got to play with all sorts of cool technologies. Other than the MRI table, we got to play with Microtiles, Unity, and Sketch Up, all while getting paid. To top things off, we also got a lot of training throughout the semester, including a few sessions on Agile project management. :) As far as part time jobs go, this was a very rewarding one. :)
If you're interested in joining REAP in a future term, you can apply on the REAP site, but I should mention that hiring for the September term is finished. They still might need people for on-demand work (especially people with game design or game development backgrounds). If you are interested in one of those positions, you can email REAP or myself. :)
Wednesday, July 20, 2011
Car Futures
The most productive thing I've done this summer is plan out my car owning future.
Currently, 1999 Chrysler Intrepid, Black (value < $100 at this point)
Sadly, this car is almost dead. Thankfully, my parents are replacing their red 1999 Chrysler Intrepid soon, and are planning to give it to me. :) It has about half the kilometers and is in much better shape (value ~$500)
After I drive this car to death, it'll be time for my first real car purchase.
Jaguar XF (value ~$60 000)
I think I will feel obligated to take up golf as a hobby at this point.
Then I'll upgrade to a Jaguar XK (value ~$100 000)
The red brake discs pictured above will be replaced (and burned >_<).
Finally, the holy grail of my car journey, Aston Martin DB9 (~ $200 000)
Yay!
This might be a little ambitious. I feel like I might need a reasonably priced sedan between the red Intrepid and the XF. Not sure what that might be yet. :/
This list will also probably change very soon. Specifically, the next time I watch Top Gear.
Monday, July 11, 2011
Spring Terms and Unity
With two weeks of classes left, I've decided that Spring school terms are a bad idea. I don't feel very academic during Spring terms. All my other Spring terms have been work terms, and I really enjoyed those, but school terms are different. I have to constantly be thinking about what I have to do for my other classes. I'm just not in the mood for it. I just want to sleep in and watch TV (currently, Top Gear and Dr. Who). That doesn't help with that 8:30am class. :P Thankfully, I have only three courses this semester, one of which is very interesting. Unfortunately, the others are pretty disappointing. One more assignment rush, then exams, then a few weeks of real summer before I start work in the Fall. Thankfully, this is my last Spring school term.
On another note, I got a chance to play with Unity over the weekend. Unity is a 3D game engine with a powerful editor that minimizes the amount of code you need to write to get something to work. We will be using Unity during the final few weeks of REAP, as we try to create a demo of a museum exhibit on mining. I'm really glad that I got a chance to get paid to learn Unity. :P
My first impression is that Unity is very powerful and simple to use. You can get a remarkable amount done without knowing how to program. Scripting is very important, but a lot of it is already done for you. For example, you can just drag and drop a collider mesh onto an object, and it instantly inherits collision physics. It's a very powerful tool. I'm looking forward to using it in the next few weeks. :)
Thursday, July 7, 2011
The Human Aspect Of Software Engineering
As a computer scientist/software engineer, it's easy to forget about the human aspect of what we do. We are often so immersed in very technical parts of the software that we forget that everything we do is for a human. If we don't keep that human in mind, the product really suffers. No matter how technologically innovative a piece of software might be, if there isn't a real, useful human connection, the software will ultimately fail. In that sense, considering the human aspect is the most important aspect to consider when writing software.
Modern development trends seem to be making steps to consider end users more during the development process. For example, agile development stresses getting early involvement from users, to ensure that the human aspect of software is always addressed. It also encourages frequent updates and demos to customers to ensure that they are always satisfied with the product.
I suspect that a lot of usability issues stem from not considering the squishy thing between the chair and the monitor. Most user interface work seems to be figuring out the best way to create that connection between the cool techy thing the developers did and the human using it.
It's easy to forget that most people are not very technologically savvy. You'd be surprised at the number of people who don't know that you can right click. I think it's really cool that interpolation search is O(log log n). Most people, however, don't care about this at all. They do care about reducing their search time in your software, though.
It's important to always keep this human aspect of software engineering in the back of your head at all times. It can really improve the software you produce.
Tuesday, July 5, 2011
Character Encoding Fun!
Let's talk about character encoding. This seems to be a common blank area of knowledge for a lot of developers.
Joel Spolsky found this to be true, so he wrote this great article about character encoding and Unicode. I really recommend that you give it a read. It's a little old (2003), but still completely relevant.
If you are feeling too lazy to read his summary (I blame summer), you can read my even shorter summary.
1) There is no such thing as "plain text strings". You should not assume any given string is in ASCII. You, in fact, have no idea what the string means until you know how it's encoded.
2) Unicode is a character set that hopes to include characters from almost all languages. Unicode is not an encoding though. Older character sets, like ASCII, mapped characters ('A') to numbers (65), which got encoded as the binary representation of that number. Unicode instead maps characters to something called code points. These code points look something like U+0041 (the code point for 'A'). The code points are then encoded using some encoding system. There are many ways to do this encoding, but perhaps the most common is UTF-8.
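Here's a small sketch that makes the distinction concrete: one character, one code point, and a different byte count under each encoding (using 'é', U+00E9, as the example):

```java
import java.nio.charset.Charset;

public class CodePointDemo {
    public static void main(String[] args) {
        String s = "é"; // one character, code point U+00E9

        // The code point is the abstract number Unicode assigns.
        System.out.printf("code point: U+%04X%n", s.codePointAt(0));

        // How many bytes it takes depends entirely on the encoding chosen.
        System.out.println("UTF-8:    " + s.getBytes(Charset.forName("UTF-8")).length + " bytes");    // 2
        System.out.println("UTF-16BE: " + s.getBytes(Charset.forName("UTF-16BE")).length + " bytes"); // 2
        System.out.println("UTF-32BE: " + s.getBytes(Charset.forName("UTF-32BE")).length + " bytes"); // 4
    }
}
```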
3) Unicode is not always encoded as 2 bytes. UTF-16 encodes the most common characters in 2 bytes, but needs 4 bytes (a surrogate pair) for everything else. UTF-8 uses anywhere from 1 to 4 bytes per code point, and UTF-32 always uses 4 bytes.
4) UTF-8 encodes the first 128 code points (U+0000 through U+007F) as single bytes with exactly the same values as ASCII. This means that any valid ASCII text is also valid UTF-8.
5) Code points can be encoded in many ways. You can even encode Unicode code points using old-school ASCII encoding. What happens to code points that ASCII encoding doesn't define? They show up as ?. If you've ever seen international data that appears as ????????, it means that the encoding being used doesn't support those code points.
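A quick sketch of that effect: encoding non-ASCII characters with an ASCII encoder, which substitutes '?' for anything it can't represent:

```java
import java.nio.charset.Charset;

public class ReplacementDemo {
    public static void main(String[] args) {
        String s = "naïve Ω";
        // 'ï' and 'Ω' have no ASCII encoding, so the encoder substitutes '?'.
        byte[] ascii = s.getBytes(Charset.forName("US-ASCII"));
        System.out.println(new String(ascii, Charset.forName("US-ASCII"))); // prints: na?ve ?
    }
}
```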
I hope this fills in some of these character set and encoding knowledge holes. :) Now I should probably do one of those assignments I have due this week (>_<). School terms in the summer suck.
Monday, July 4, 2011
Google+ (and -)
Google's attempt at the social market, Google+, came out the other day. It's an interesting application.
The first thing that strikes me is the UI. I think Google+ has a fantastic user interface. It's simple, clear, and easy to learn. One thing that I really like is Google's attention to details in their user interfaces. Whenever you click "+1", there's a little animation of the number rolling up. If you delete a "circle", there's a little animation of it rolling away off the screen. These little things contribute to a great user interface.
Compared to the Facebook UI, Google+'s UI is a breath of fresh air. However, Google+ only has a tiny (really tiny) subset of Facebook's features. This probably contributes heavily to Google+'s simple UI. I suspect that when (if?) Google+ gets all the features that Facebook has, the user interface will become a lot more cluttered. With that said, it's not hard to beat Facebook's user interface.
This sort of leads me to one of Google+'s biggest drawbacks. They really offer a very limited subset of Facebook's features. There are no events, messages, chat (EDIT: Just kidding. They have chat), or even "wall-to-wall" posts. An application API is also missing (Farmville+!). Granted, Google+ is still at a very early stage, so it might get a lot of those features later.
The other big drawback is userbase. It is very hard to have a successful social networking application without a lot of users. People won't switch to Google+ until their friends switch. Of course, their friends are thinking the same thing. I think Google can overcome this problem fairly easily though. Perhaps we'll see migration tools that let you quickly populate your Google+ account using your Facebook data.
There are a few neat features in Google+. The one that impresses me the most is the idea of Circles. With Circles, Google+ lets you place your "friends" into various groups. Then you can choose which groups, or circles, can see what content. This is a nice way to keep your family from seeing your status updates about drinking and partying.
Another benefit is that Google has a much more sensible TOS than Facebook. They also have a better history of protecting things like privacy. I know for a lot of people, this is a very big deal. I personally don't care too much about this one. When you put things on the internet (especially on a social networking site), you always risk that everyone might be able to read it. This is why I never post things like my phone number on Facebook (even if it's just for "Friends"). The only information I have on Facebook is information that I would feel comfortable telling strangers.
A huge problem I've had with Facebook is their rollout strategy. They seem to be fans of release early, release often, but they suck at it. It is almost a weekly occurrence that a major piece of functionality is broken. Facebook doesn't take enough time to do regression testing before they push updates, and it really bugs me. Just because you can fix it fast doesn't mean you can ship it in a broken state. >_< I've found Google to be much better in this area. They also progressively add to their software, but it isn't crippled every week by stupid release strategies.
I'll keep an eye on Google+ going forward, but they have a lot to do before they can realistically hope to beat out Facebook.
Friday, July 1, 2011
Java is Always Pass-By-Value
This is probably the biggest common misconception in Java. It's starting to become a minor pet peeve of mine. :P People say things like "Java is pass-by-value for primitives, but pass-by-reference for Objects". This is not true.
In fact, Java always uses pass-by-value. The trick is that Java always stores references to Objects. When you pass in an object to a method, the object reference is passed by value. This is different than pass-by-reference. Java makes a copy of the reference variable and that's what the method uses. While a lot of the time you won't be able to tell the difference, there are some important cases where this makes a difference.
Here's an example:
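Something like the following (a minimal sketch; the Box class is just a made-up stand-in for any object type):

```java
public class SwapDemo {
    static class Box {
        int value;
        Box(int value) { this.value = value; }
    }

    // Tries to swap the callers' references. Only the local copies x and y
    // are swapped; the callers' variables still point at the original objects.
    static void swap(Box x, Box y) {
        Box temp = x;
        x = y;
        y = temp;
    }

    public static void main(String[] args) {
        Box a = new Box(5);
        Box b = new Box(10);
        swap(a, b);
        System.out.println("a: " + a.value);
        System.out.println("b: " + b.value);
    }
}
```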
The output of this program is:
a: 5
b: 10
This is unexpected behaviour if you think that Java is really pass-by-reference. What this code really did was swap two copies of references, not the references themselves. This caused me a few headaches in the past.
This misconception has been around for way too long. Spread the word. :P
Monday, June 27, 2011
Commenting: The Lazy Way Out
I'm a big believer in self documenting code. That is, code that is structured to be readable without comments. There are a lot of problems with comments. First, they are notorious for getting out of date. If you've ever been bitten by a misleading comment, you will know that no comment is much better than a false one. I see most comments as crutches. You have this bad code, and you try to "fix" it by just adding comments to the code, since that's the easiest way to make the whole package somehow understandable. Unfortunately, at the end of the day, the code is still awful. In this way, comments are the lazy way to make code readable. In fact, most of the time I treat comments as a potential code smell. It is almost always better to refactor the code to be more clear, instead of annotating the code.
I've heard other developers say things like self-documenting code is a lazy excuse for not adding comments. I disagree. Writing self-documenting code is orders of magnitude harder than writing descriptive comments. It also requires a lot more time and effort than just commenting your code. However, it is also much more effective at making code readable. When your code only makes sense in the presence of comments, you are making that code much harder to use in other areas. Are you going to include the comments wherever the bad code is used? Copy-pasta?
There are, however, a few cases where comments are the way to go. They are much easier and quicker to write than actually refactoring the code. This makes them preferable when you have to write code under a very tight deadline. However, I would treat them like any other "hack" developers do in the heat of a release; do it now, and fix it as soon as possible when the deadlines loosen up.
There are also some times where refactoring the code will lead to a lot more code for little readability benefit. In these cases, a comment might be a better solution. Having too much code, however clean, is also a very big problem, because it makes the overall project harder to understand. However, to me this seems like a rare case. It is almost always better to refactor than to add a comment.
As an exercise, take a look at some old code you wrote and find the lines of code with comments. Can you think of a way to refactor it to be cleaner? I think in 90% of those cases, you will be able to refactor the code to make it much more readable without comments.
Friday, June 24, 2011
Why Agile Development is More Fun
I just read this article claiming that Agile is "boring". I'm not sure how this person got to that conclusion. He also claims that Agile is very rigid and strict, although it's probably one of the most relaxed project management methodologies out there. It's certainly more dynamic and flexible than Waterfall models are.
From the article, it seems that this person works somewhere without any concept of project management at all. He talks as if he doesn't have deadlines to meet for his organization. I'm not sure where he's working that he can get away with this. Almost all projects have deadlines. It's very useful for business people to have things like estimates and set deadlines. Pretending they don't exist is no way to professionally develop software, and certainly not a realistic way to grow as an organization.
The writer says that Agile development gets boring after you do it for a couple projects. Not sure where that's coming from. I find that Agile development environments are much more interesting, because there is much less repetition. From iteration to iteration, you could be working on very different projects. Agile allows (and even encourages!) developers to explore other areas of the software and cross-train. You are also much less likely to be pegged as the "Database guy" or "UI guy" in an Agile project. While you might have a lot of experience with UI, your task is really whatever the project needs. If that means moving outside of your domain, so be it.
When I worked at Karos Health we practiced Scrum, a form of Agile, and I found it to be very flexible. While most of the time I was developing UI code, I also participated in all the other sections of the applications. I got to see all the parts of the application.
Also, Agile teams are encouraged to work very closely together. This interaction creates a very interesting working environment where you are constantly learning. This is certainly more interesting than working your way down an ad-hoc todo list by yourself, conversing with other developers only when absolutely necessary.
I suspect that the author has never worked in an Agile company (or at least one that's practicing Agile correctly), because his comments seem to be the opposite of what Agile development encourages.
Monday, June 20, 2011
Improving Performance
Coding Horror updated today, talking about the importance of performance for web applications. He cites some studies that show significant drops in website usage as pages slow down. While this is definitely important, you have to be careful not to get carried away. Unless you're Google, you probably don't need to shave a few milliseconds off your page load times. My basic rule of thumb is to only optimize things if it will make a noticeable difference to the performance of your application. Humans can't detect differences of a few milliseconds.
Of course, there are always times when you do need to optimize for performance. Doing this in a smart way can save a lot of developer time. Apparently, Yahoo has a well-cited set of tips for improving site performance. Some of these are really easy to do and have a major impact on load times. Minimizing HTTP requests is a big one. It's fairly easy to merge all your JS files into one optimized file, and there are plenty of tools that do this for you automatically (like Closure Tools).
80-90% of the user's time is spent downloading "stuff" to the client. Minimizing the "stuff" is a very powerful way to improve response time. For example, take a look at http://www.google.ca/. Look at the source code. You won't see "wasteful" things like spaces and linebreaks! Granted, this is a pretty extreme example and Google probably needs it to be this optimized.
You can get another huge performance boost by using your cache better. HTTP has caching built in via conditional GETs, but it requires websites to set response headers intelligently. Using a longer expiry time can notably improve performance. In general, caching is perhaps the biggest thing keeping things running fast. If your computer didn't have a cache it would barely be able to function. If our DNS name servers didn't cache anything, the internet would crawl.
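Here's a small sketch of a conditional GET from the client side, using plain HttpURLConnection (example.com is a stand-in; whether you actually get a 304 depends on the headers the server sends):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class ConditionalGetDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/");

        // First request: note the Last-Modified validator the server sends back.
        HttpURLConnection first = (HttpURLConnection) url.openConnection();
        String lastModified = first.getHeaderField("Last-Modified");
        first.getInputStream().close();

        // Second request: ask for the body only if it changed since then.
        HttpURLConnection second = (HttpURLConnection) url.openConnection();
        if (lastModified != null) {
            second.setRequestProperty("If-Modified-Since", lastModified);
        }
        // 304 means "use your cached copy" -- no body is transferred at all.
        System.out.println("response code: " + second.getResponseCode());
    }
}
```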
Another simple thing you can change is the order of your information on web pages. You want to put your CSS files at the very top of the HTML file. Once your browser gets this data, it can start progressively rendering the page. You also want all the scripts at the bottom, because they take a (relatively) long time to download, and might block concurrent downloads of other things.
A very powerful tool here is your profiler. Some browsers (like Chrome) have this built in. A profiler can tell you exactly where the bottlenecks in your system are. You should never optimize something before consulting your profiler. Often, you might find that the thing you were going to optimize is negligible compared to something else.
I thought these were some interesting things to know in the few cases where you need to spend time optimizing for performance.
Saturday, June 18, 2011
TDD and YAGNI
Here's an interesting read. It talks about how, if you practice Test Driven Development (TDD) in an Agile environment, there is a lot of pressure to adhere to YAGNI. YAGNI stands for "you ain't gonna need it". Developers often try to predict what features the code might need in the future, and try to build them in right away.
The problem is that most of the time, you won't actually need the feature you predicted you would. To avoid this problem, supporters of YAGNI try to write the smallest amount of code possible to accomplish something. TDD has a similar philosophy, stressing that you write the smallest amount of code possible to satisfy a test.
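Here's a tiny sketch of what that looks like in practice, using JUnit 4 (the FizzBuzz example is mine, not from the article): the test comes first, and the implementation is deliberately the least code that passes.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class FizzBuzzTest {
    @Test
    public void threeBecomesFizz() {
        // The test is written first and defines the only behaviour we need so far.
        assertEquals("Fizz", FizzBuzz.say(3));
    }
}

// The YAGNI-style response: the least code that makes the test pass.
// No generalization until another failing test demands it.
class FizzBuzz {
    static String say(int n) {
        return "Fizz";
    }
}
```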
This seems like an extreme to me. While I agree that future proofing every part of your application is a silly idea, I don't think that this sort of TDD and YAGNI is very scalable. It's not really about doing the simplest thing possible, it's about the simplest thing possible that doesn't code you into a corner. You want to make reasonable assumptions about what will change, and future proof that. It's important to be pretty conservative with your guesses here, though.
A good developer will be able to draw on past experiences to predict that some things will happen. If they think it's easier to build it in now, they should. I would trust the judgement of an experienced developer over a best practice.
I feel like YAGNI is overcompensating for developers who apply design patterns to everything ever because they learned to do that in school. This is obviously the other extreme.
It's like software development practices are following a pendulum, swinging from extreme (Waterfall, design patterns everywhere) to extreme (Extreme Programming, YAGNI). It's still a very volatile field because it's so young. Hopefully the industry will settle down somewhere in the middle.
Labels:
Agile,
Software Engineering,
Test Driven Development
Thursday, June 16, 2011
Ignite Waterloo
Yesterday, I got a chance to go to Ignite Waterloo 6, a community event where people give 5 minute talks on topics they're passionate about.
As always, it was an excellent event. It's fun to see some of the same faces at all the community events in Waterloo.
There were some notable talks. Cate Huston gave a talk entitled "Why Do Programmers Have to Lie to Get Dates?", where she claimed software developers have a communication problem that we need to address. There is a lot of confusion about what software developers actually do, and it leads to some interesting questions like "So you work for the internet?". If we figure out how to communicate better, not only will we create better software, but people will understand what we actually do. :P
Syd Bolton talked about cool uses of old computers. He is one of the founders of The Personal Computer Museum.
Bob Rushby, ex-CTO of Christie Digital, gave a talk on how the future will be full of pixels. He imagined a world where everything analog is replaced with something digital. Cool stuff.
Ben Brown gave an interesting talk about getting rid of all road signs. Apparently some places in Europe have done this with great success. I'm not convinced that this would work in Canada, especially in larger cities.
Steven Scott debunked the "I've got nothing to hide" argument on Privacy. Also an interesting discussion.
To finish the night, my good friend Amal Isaac gave a great talk on The Technological Singularity. He talked about an interesting future when, inevitably, computers surpass our intelligence.
All-in-all, it was another great event in the KW community! :)
Tuesday, June 14, 2011
Resumes for Programmers
How useful are resumes for programmers? I've read a few articles now (including this one entitled "Programmer Resumes Are Deprecated") that claim employers are much more interested in artifacts and evidence of your programming. Things like github accounts, personal projects, and development blogs.
I think part of the problem is that when someone writes "Experienced with C#" on a resume, employers don't really know what that means. Without hard evidence to back you up, it's hard for employers to believe you. Perhaps more importantly, these skill levels are relative. I might think that I know C++ really well, when in reality, I only know a small part of the language well. I think these differences in perception are a pretty big problem in hiring developers.
Some skills are also really hard to "prove" on a resume. Sure if you put "Proficient in C#" and then list a bunch of jobs where you used C#, they are more likely to believe you, but how do you prove good object oriented design skills? Or knowledge of the SDLC? Or processes like Scrum? You could try to force some sentences about all these skills, but it will make your resume really long, and you'd still have to worry about the problem of what does proficient really mean?
A better solution might be to have a bunch of links to things like blogs and personal projects in your resume. This way when you say "Experienced with C#", your employer can check out what your "experienced C#" code actually looks like. Then they can make their own decision on your skill level, instead of trusting that what you mean by "experienced" is the same as what they mean.
I don't think we should scrap resumes altogether, since they're a good way to summarize your skills for someone without a lot of time. However, I think the hiring decision should focus more on tangible projects that employers can see for themselves.
Sunday, June 12, 2011
More on Mixed Reality Interfaces
Here's a few videos showing how the Mixed Reality Interfaces (MRI) work. As I mentioned before, our REAP team this term is exploring interesting uses for this technology.
This is the best video, I think:
There's a few more here and here.
Pretty cool stuff. Our REAP team is currently looking into getting a sample museum exhibit built using this technology, so we can demo it to some real users and see what they think.
In other news, rankings come out on Friday! Yay! I have 6 interviews before then, so it'll be a busy week. Other notable things next week include Ignite Waterloo (so stoked!) and an interesting sounding talk at uxWaterloo. There might be a midterm somewhere in there too, but that's considerably less interesting. :P
Wednesday, June 8, 2011
Common Interview Question: Abstract Classes vs. Interfaces
I had an interview today where, yet again, I got asked the difference between an abstract class and an interface. In fact, I would estimate about 50% of the job interviews I've had for Java development have asked this exact question.
The answer is pretty straightforward. An interface defines some behaviour that can be added to an existing class. The class can choose how to implement that behaviour, but by implementing the interface, it's declaring that it has some capability.
An abstract class isn't used to add capabilities to an existing class. Instead, it's meant to be a basis for future classes. Abstract classes can also do some things that interfaces can't: specifically, they can have state and default method implementations.
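To make that concrete, here's a minimal Java sketch (the class names are invented for illustration):

// An interface declares a capability that any class can claim:
interface Flyer {
    void fly();
}

// An abstract class is a partial base implementation: it can hold state and
// provide default behaviour, but it can't be instantiated directly.
abstract class Bird {
    private final String name; // instance state -- an interface can't have this

    protected Bird(String name) {
        this.name = name;
    }

    // Default behaviour shared by every subclass.
    public void eat() {
        System.out.println(name + " is eating.");
    }

    // Subclasses must fill this in.
    public abstract void makeSound();
}

// A concrete class can do both: inherit the base and claim the capability.
class Sparrow extends Bird implements Flyer {
    Sparrow() {
        super("Sparrow");
    }

    public void makeSound() {
        System.out.println("Chirp!");
    }

    public void fly() {
        System.out.println("Flap flap!");
    }
}

A class can only extend one Bird, but it can implement as many interfaces as it wants, which is really the heart of the answer.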
If you have a job interview for a Java developer position, I recommend that you know how to answer this question.
Ugly Concept Cars
Why is it that most concept cars I see look just awful? Like this:
[image: Lamborghini Ankonian Concept]
Or this:
[image: Pontiac Solstice Concept]
Or most of these!
They even managed to make an Aston Martin look bad! :(
[image: Aston Martin One-77]
Saturday, June 4, 2011
REAP Projects
I thought that I'd give an update on what we are doing for the REAP project this term. We are working with Mixed Reality Interface (MRI) technology. Check out the videos (entitled MRI - demo) on their site for a quick demo of its capabilities. It is essentially an interactive table that connects to an external display. Actions on this table are reflected on the external display. You can imagine having a "character" (think: lego man) moving around the surface of the table. In this context, the character's point of view would be displayed on the external monitor. If the table displays a floor plan, you can imagine the external monitor showing the point of view of the lego man in the room. Turning the character on the table is equivalent to looking around the "room".
The table offers some other nice features. The table reads "barcodes" (really, just pieces of paper with patterns) placed on the table and then takes some action in the table or external monitor view. This lets us dynamically change what's happening on either the table or the display.
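I obviously can't show the real MRI internals, but conceptually the barcode system is just a lookup from marker IDs to actions. A hypothetical Java sketch of the idea (none of these names come from the actual SDK):

import java.util.HashMap;
import java.util.Map;

public class MarkerDispatcher {
    // Stand-in for whatever state drives the table and the external display.
    static class Scene {
        void set(String property, String value) {
            System.out.println(property + " -> " + value);
        }
    }

    interface MarkerAction {
        void apply(Scene scene);
    }

    private final Map<Integer, MarkerAction> actions = new HashMap<Integer, MarkerAction>();

    public void register(int markerId, MarkerAction action) {
        actions.put(markerId, action);
    }

    // Called whenever the table's camera recognizes a barcode pattern.
    public void onMarkerDetected(int markerId, Scene scene) {
        MarkerAction action = actions.get(markerId);
        if (action != null) {
            action.apply(scene);
        }
    }

    public static void main(String[] args) {
        MarkerDispatcher dispatcher = new MarkerDispatcher();
        dispatcher.register(1, new MarkerAction() {
            public void apply(Scene scene) { scene.set("cabinets", "marble"); }
        });
        dispatcher.register(2, new MarkerAction() {
            public void apply(Scene scene) { scene.set("walls", "red"); }
        });

        Scene scene = new Scene();
        dispatcher.onMarkerDetected(1, scene); // throw the "marble cabinets" card on the table
        dispatcher.onMarkerDetected(2, scene); // throw the "red paint" card on the table
    }
}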
Our REAP project involves trying to find uses for this technology. We are mostly approaching this from a technology-first design point of view.
The first business case we are exploring is home design. Imagine being able to visualize and experience a 3D view of your house before it's even built. The barcode system allows for very quick customization, so it can really help home buyers/designers visualize various design combinations. For example, if you wanted to see what marble cabinets look like in a red-painted room, you would simply throw those two barcodes on the table and actually see how it looks from different points of view. Currently, home buyers have to do all this combination visualizing in their heads. Needless to say, that is much less effective (especially if you're someone like me :S).
The second case we are pursuing is virtual museum exhibits. Imagine modeling an entire Roman city and letting people walk through it and explore it any way they want. There would be various points in the virtual world where information could be displayed. Better yet, one could imagine rendering animations and movies, instead of just a static world. With a system like that, you could watch two dinosaurs fight, and then choose where to go next in the virtual world. How about making a shared world between many tables? That way a whole class could experience the same world in their own way, almost like an MMO game. We even played around with making your own exhibits by placing figures (with the barcodes on the bottom) on the table. The system would then interpret the objects on the table and make them interact.
In the future, we might want to swap out that external monitor for something like 3D cave technology. This would let us project a 3D world around the users to create an even more immersive experience. For now we are focusing on starting small, though.
These are just some of the fields we are looking at right now. Technology like this is fairly general, so we could apply it to almost any field. For that reason, we decided to pick a few and run with them. If we talk to the business users and find that they don't think it's useful, we can just move on to one of the other umpteen ideas we have. It's a fun way to work.
The people on the REAP project are all very cool (and talented!) people, so it's a lot of fun to work with them. There's also a bunch of free training (Agile training, presentation training, etc). All-in-all, it's a pretty great part time job. :P
Friday, June 3, 2011
Sony Hacked Again
So it looks like Sony was hacked again. Things are not looking good for Sony. It's been almost two months since the original hack in April. Why does Sony still have unencrypted databases? Didn't they hire a bunch of security consultants after that first security compromise? I would imagine that "Encrypt your freakin' data" would have been one of the first things those security experts said. So why is this still a problem?
One of my friends thinks it's a size problem. Sony has a lot of systems to fix, and the hackers are working faster than Sony's developers. I'm told that things move very slowly in huge companies. While this might be part of the problem, I feel like there must be something else at play here. Sure, protecting against SQL injection is hard(ish), but hashing data shouldn't be that bad. Perhaps their code is poorly written, and adding in data encryption is very hard to do. In any good system, there should be just one layer talking to the data directly. In such a system, making this change wouldn't be that hard. They would also have to hash all the existing data, but that is easy script work.
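For what it's worth, salted hashing really is only a few lines with what ships in the JDK. A rough sketch (the iteration count and key length are my own picks, nothing to do with Sony's actual systems):

import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHasher {
    // Hash a password with a random salt using PBKDF2 (bundled with the JDK).
    // Store the salt and the hash; never store the plaintext password.
    public static byte[] hash(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 256);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        return factory.generateSecret(spec).getEncoded();
    }

    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }
}

Verifying a login is just re-hashing the attempt with the stored salt and comparing. A script that runs this over every existing row is, like I said, easy script work.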
Maybe it's an IT infrastructure problem. That is, encrypting data makes things much slower (conceivably twice as slow for data access), and maybe the Sony servers can't handle that extra load.
I also wonder why this was such a huge security hole in the first place. Is it really because Sony doesn't have any security-conscious developers? I doubt it. It's a pretty popular subtopic in software engineering, so I'm sure someone on their payroll took the time to learn about it. It's not like you need a Masters in Security to know to encrypt sensitive data.
I think the developers were just lazy. It's certainly easier to develop and test a system without encryption. It was probably put on some todo list for later, but that later never came. The development team was probably more interested in starting on new projects or features, and they assumed no one was really trying to break their system anyway. Or maybe the developers wanted to implement these security features, but management didn't think it was worth the time and money.
I would really like to know how this all happened, but I don't think Sony will ever reveal the real reasons. I do know that it must be..."fun"...to be working at Sony right now.
Thursday, May 26, 2011
EHR Usability
I just read this article that talks about usability with EHRs. The writer says that EHR systems are too difficult for non-technical physicians to pick up and use on a daily basis. He claims that a lot of older physicians aren't using EHR systems because they don't know how to use computers well enough. He also claims that the government shouldn't force EHR systems on physicians.
While I agree that usability should be a huge concern for software developers, I think the writer is a little extreme in claiming the software is too hard to use. There isn't much you can do at the software level if your user isn't comfortable with a mouse. At Karos Health, we put a lot of effort into creating easy-to-use software that would be intuitive for all users, with good feedback from a lot of people. It's clear that there are plenty of physicians who have no problem using software.
We can't just let doctors do things the "stupid" way because they don't want to learn something new. If doctors never learned new things, we wouldn't have any medical imaging and we'd still be using whiskey as an anesthetic. Clearly that's not how medicine works.
So the question is: should we allow physicians to do things one way when there is a better way? Especially if the better solution can reduce serious errors. There are other reasons why some doctors might not want to use EHRs, but should a learning curve be one of them?
I don't think so. I think it's very important for physicians to keep up with the times. Systems like EHRs are letting physicians do things that were once very difficult or impossible. They save lots of time and money, which ultimately leads to better service. The cost of an EHR is quickly made up when you consider the money saved by not paying someone to collate paper charts and not maintaining rooms full of paper files. It also provides more security and helps reduce errors.
I agree with mandating use of EHRs, but I also think that software designers need to think more closely about usability, especially for less technical users.
Wednesday, May 25, 2011
Design Strategies
I see two ways of designing a product.
The first is Technology-first design. This is where the group has a specific technology that provides some capabilities. The team takes a lot of time to flesh out exactly what the system is capable of doing, and how. This discussion gets pretty detailed (like talking about UX or implementation details). Once the team has a very good idea of how the technology can be used, they try to find a market for it. They try to "shop" around for problems in any industry that might be served well by this technology.
The other is User-first design. Here the group knows the overall capabilities of a technology, but they don't discuss all the details. Instead, they focus on finding users first and then adapt the technology to the problem (instead of the other way around). Here, the group spends a lot of time discussing various markets and their problems. They talk to customers before they conduct in-depth research into the technology itself.
Obviously, a successful project will need to consider both the use cases and the technological details, but the question is which one a team should consider first. In REAP, it seems that we are doing Technology-first design. That is, we are trying to fit a problem to our technology instead of the other way around.
While this approach is fine in general, I find that it might cause compromises in the final solution. If the team is focused on the details of the technology, they might be more inclined to morph the problem (and solution) to match the technology. A better approach is to morph the technology to match the problem. This creates a better solution, since it is focused on user needs.
Labels:
Project Management,
REAP,
Thoughts,
User Interface
Monday, May 23, 2011
New York!
I went to Sackets, New York with Dani this long weekend! I got to go to my very first wedding! It was so much fun!
I had Scrum training with Declan Whelan as part of REAP on Friday. It ended at 7pm, so we had to leave for New York pretty late. The training was great! I'm very interested in seeing how the REAP team can apply Scrum and its concepts to a non-software project. The whole team seemed very interested in pursuing the idea of using Scrum for the project, so we'll probably get more training in the future. Awesome!
Anyway, we left Waterloo pretty late. We stopped by Toronto to get my passport (border laws >_<), and then chugged along to our hotel in Watertown, NY. We got there around 1:30am. I learned (slash remembered) that King sized beds are not a little bigger than a Queen sized bed. They are much, much bigger. It was awesome. Of course the first thing we did was jump on the bed. Go us.
Next morning, we went to the wedding in Sackets. They had a small (40 people) ceremony under a gazebo overlooking the harbor. It was quite pretty! The whole thing only lasted about 10 minutes. A good introduction to weddings for me. :P
After that, Dani and I sneaked off to eat some food, while everyone else took buttloads of pictures. It was good. Then we went to the reception...somewhere. lol. That was also fun. We actually got a chance to talk with the bride and groom. It was nice of them to take time to talk to all the guests. :)
After the wedding, we were both exhausted, so we went back to the hotel to relax. Dani fell asleep for 2 hours and then couldn't get to bed until 5am. Sweet deal.
Sunday was mostly uneventful. In the morning (well, "morning" = noon), we went for a drive around Sackets. It's a really nice city. It reminds me of cottage country. It's fun to see how many American flags there are in that city. Literally every pole has a flag on it. You never see that in Canada.
After our morning joy ride, we went back to the hotel, watched a movie (My Big Fat Greek Wedding!), and then went on another late night joy ride! We went on a quest to find an open Dunkin' Dodo's (tm). We got lost. Surprise. It was another fun drive though.
All in all, it was a great weekend. My first wedding was excellent! Yay! :)
Monday, May 16, 2011
Release Early, Release Often...Carefully
There's a lot of talk about releasing software early and often. The idea is that you get software out to your users faster, so you can get their feedback faster. Releasing often also has the benefit of keeping your users constantly involved in the development of the application. In general, "release early, release often" is a great idea... if you do it well.
The caveat is that people need to be really careful with what they release. It is okay to release a new version of your software with just one new feature. Releases lacking features are fine. Quality, on the other hand, should never be compromised. That is, you should never release untested software or software with major known problems. You should never think "That's okay, we'll just fix the bugs in the next release. We release often anyway." I find this incredibly annoying behaviour from software companies. Not only is it very frustrating, but it also makes me question the professionalism of the company. Releasing untested programs is never acceptable, and it reflects very poorly on the company's stance on application quality. Users understand when your program is still in its infancy and is missing features, but they shouldn't have to deal with bug-filled applications.
Keep that in mind when you are considering a release of your software. Quality should never be something that is put off for a later release.
Schedule Changes!
Some of you may or may not know that my schedule has been really messed up this semester. I started with 4 courses, one of which was Real-Time. I swapped CO 480 (History of Mathematics) for CS 456 (Networks), because History of Math was really interesting but a lot of work. I am currently sitting in on the course though. Steven Furino, the history professor, is a very interesting person to listen to.
Then I realized that taking 4 fourth-year CS courses, including RT, was stupid. Some of my friends in RT were only taking 2-3 other classes, all bird-y ones like History of Film. When I realized that 4 CS courses was silly, I dropped Distributed.
Then the weekend came. RT A0 was due Monday, and I spent most of my spare time in the RT lab. This is after spending 5-10 hours in there per day since the first day of classes. >_< I got most of A0 done, which was rewarding, I guess.
While Real-Time is very interesting and satisfying, the amount of work is unreal. It's sort of hard to convey. I suspect that it's more work than I did in all of first and second year combined. I could do the course, but I would have to give up literally everything else in my life, including other classes (and sleep!). I am also more interested in the material in my other courses (especially Networks/Distributed) than the Real-Time material. It seemed weird to miss out on learning that material for a subject I didn't really care about in the first place. I didn't think it was worth it, so I dropped Real-Time.
Instead of Real-Time, I decided to pick up Distributed (again) and UI. Unfortunately UI was full, so I only got into Distributed.
So my final schedule, after 2 weeks of shuffling and being on wait lists, is Architecture, Networks, and Distributed.
I would have liked to take 4 courses, but it feels sort of nice to take only 3. I will pretty much treat this as a summer vacation. :P I could use a break, and I'd like to have time to actually enjoy summer for the first time in 3 years. I also have REAP, so I'll still be busy enough, but after that week of real-time, my other course loads seem pretty light.
I am a little sad about dropping Real-Time, but I'm also very relieved. I will actually learn stuff in my other courses now, and enjoy my semester. I think that I made the right choice.
Now if you'll excuse me, I have a sushi date with the Girl.
Wednesday, May 11, 2011
EMRs and Privacy
Today I got the chance to talk with a doctor at UW. I asked him what he thought of EHRs that enable data sharing between health care providers. He brought up some interesting points.
Basically, he didn't think it would be that useful for doctors. In fact, he thought that it would have some negative consequences. Specifically, he claimed that people would not be truthful if they knew a lot of people might have access to that information. Would you answer truthfully if someone asked you how many sexual partners you've had, if you knew a lot of people might have access to that information? Apparently, people are hesitant to give out that information even when they know that only the doctor will know about it. He thought that a more available EHR would just create more falsified records. This would make the EHRs unreliable.
He brought up another point about logistics. Where do you store this EHR? Do you associate it with your health card? Well in our case, that would only work in Ontario. Further, what happens when you lose your health card? I think the industry uses EMPIs for this right now.
This is coming from a doctor that's been working with these healthcare systems for over 30 years. It's an interesting point of view.
I think a lot of these problems can be solved by thinking carefully about the privacy concerns with EHRs. Patients should have the power to specify which information is available for others to see. I think a lot of these concerns may be solved by storing all the information with the patients. That way, patients are in control of their health care records.
In any case, I'm very curious to see how this pans out. I certainly don't have a solution for how to solve these problems, but I'm sure something interesting will emerge within the next 5-10 years.
Tuesday, May 3, 2011
Start Of The Term
The term started yesterday, and things are already getting busy. I think my blogging frequency is going to be much, much lower this semester. :/
The first Real-Time assignment is out, and it's very intimidating: 7 days to create a command line interface for the trains, including real-time displays for a lot of data about the train system. All this without an operating system. This is going to be a hell of a semester. Lessons learned so far:
1) makefiles suck
2) operating systems are useful
I changed my schedule a little. For one, I dropped History of Mathematics. Although it seems very interesting, I don't think I could handle the workload alongside RT. Apparently the course involves a lot of research and writing. I will, however, sit in on the class. Listening to Steven Furino lecture is a pleasure, and I have a lot of time to kill between classes on those days. I also might pick up Networks. I'm currently on the waiting list for it, but I don't know if I actually want to take 4 courses or not. I've heard that Networks is a pretty easy course, so we'll see.
The first REAP meeting is today too. I'm looking forward to meeting the whole team. I'm also looking forward to playing with these MicroTiles. I watched an hour-long training video on them yesterday, but that's not as fun (or educational) as actually playing with them.
Looks like this will be a very busy, but fun, semester!
Saturday, April 30, 2011
Spring!
Yesterday was my last day at Karos Health. It was an excellent term with amazing people, and innovative software and technology. I'll definitely miss working there. They gave me a remote controlled helicopter as a parting gift.
How cool is that!
In other news, I'm quite excited to start my next semester. It'll be my first Spring school semester, and I hear the campus is awesome this time of year. I'll try to enjoy as much of it as I can with Real-Time. :P I will be taking:
- Software Design & Architecture
- History of Mathematics
- Real-Time Programming
- Distributed Systems
Should be a fun semester. :) I found out that one of my co-workers taught my Distributed prof. :P
I also have REAP next semester, which should help me meet my excitement quota for the semester.
In other news, the Ignite Waterloo for Spring has been announced. I went to the one in Winter, and it was excellent! I highly recommend that you check it out, if you get the chance.
Tuesday, April 26, 2011
Importance of Data Visualization In Agile Teams
I went to an Agile P2P meeting after work today. The speaker was Jason Little, an agile coach. He seems like a very interesting guy. I like how he doesn't seem to get bogged down by what "agile" says you should do, but rather he focuses on getting actual results. It's refreshing to see.
Anyway, he was talking about the importance of data visualization in agile teams. By displaying data in concise and intelligent ways, major problems become much more evident. Problems that are hidden by poor data presentation can become glaringly obvious when you display the information in the "right" way. Uncovering these problems is a huge part of improving what you do as an organization. For example, take stories in your electronic issue tracker. If you have a lot of bugs in your issue tracker, you might not see it right away because of the way they're organized. If instead you put all your issues on coloured sticky notes on a board, and you see a clump of red tickets, it becomes immediately obvious that your software development process might have serious quality gaps. If you further organize those tickets by time, you can visually see when a lot of those bugs were discovered (and can guess when they were introduced).
The importance of data visualization doesn't only apply to agile, though. Data visualization is important in a lot of fields. Humans suck at looking at unorganized data and making sense of it. Computers are much better at this. Data visualization will be very important in the future to help humans make sense of the immense amount of knowledge available in some fields. Like in the agile example above, some problems might become glaringly obvious if you organize and display the information in the "correct" way. The cure for cancer might be hiding in the data, and it's just a matter of showing it the right way before it jumps out at someone.
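As a toy illustration of that bug-clump example, even a dumb text histogram makes a spike obvious in a way a flat ticket list never does (the data here is invented):

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class BugHistogram {
    public static void main(String[] args) {
        // Invented data: the month each bug was filed.
        List<String> bugMonths = Arrays.asList(
                "2011-02", "2011-04", "2011-04", "2011-04",
                "2011-04", "2011-04", "2011-05");

        // Bucket by month; TreeMap keeps the months sorted.
        Map<String, Integer> counts = new TreeMap<String, Integer>();
        for (String month : bugMonths) {
            Integer count = counts.get(month);
            counts.put(month, count == null ? 1 : count + 1);
        }

        // The April spike jumps out here; it's invisible in a raw list.
        for (Map.Entry<String, Integer> entry : counts.entrySet()) {
            StringBuilder bar = new StringBuilder();
            for (int i = 0; i < entry.getValue(); i++) {
                bar.append('#');
            }
            System.out.println(entry.getKey() + " " + bar);
        }
    }
}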
Okay. I'll stop procrastinating now and finish off that work report.....
....
or watch Community.
Monday, April 25, 2011
SOAP And REST
For my work report this semester, I decided to write about RESTful web APIs and how they compare to SOAP-based APIs. Conclusion: SOAP sucks. Here's an interesting fictional conversation that pretty much summarizes why SOAP is generally awful. Basically, SOAP rebuilds a lot of what's already in HTTP from scratch. And to top it off, it uses HTTP as a mere transport protocol, ignoring all the application-level protocol features.
This is one of the biggest reasons why RESTful APIs are better than their SOAP equivalents. For one, they use more than just HTTP POST, so they get some free perks from HTTP (like cacheable GET calls). Because RESTful APIs are built as a thin layer over HTTP, you can even use your browser for testing. In practice, this has saved me loads of time. RESTful APIs also tend to actually use things like HTTP status codes, instead of reinventing error handling like SOAP does. SOAP just returns HTTP 200 (which indicates success in the HTTP world), but the response body might contain an error. What? Who thought that was a good idea?
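Here's roughly what I mean, as a sketch (the endpoint is made up): a plain HTTP client can branch on the status code directly, with no fault-envelope parsing required.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestStatusDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical RESTful endpoint, purely for illustration.
        URL url = new URL("http://example.com/api/patients/42");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        // REST reuses HTTP's own error signalling.
        int status = conn.getResponseCode();
        if (status == 200) {
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
        } else if (status == 404) {
            System.out.println("No such patient.");
        } else {
            System.out.println("Request failed with HTTP " + status);
        }
        conn.disconnect();
    }
}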
In general, RESTful APIs are simpler than SOAP (partially because they reuse parts of HTTP). They don't need every aspect XML-encoded, and there's no need for something like WSDL (which, I should mention, is a clusterfuck).
So when you have a choice, always go with RESTful APIs. There are occasionally times where you might need to use SOAP (like when you need something like WS-Security), but for almost all applications, RESTful APIs are a much smarter choice.
So here's the 200-word version of my 2000-word work report (and I got to use the word "clusterfuck"!). :)
Labels:
REST,
SOAP,
Software Engineering,
Web Development
Saturday, April 16, 2011
How I spend my 9-5
For the last two weeks at work, I've been working on a new team, developing an application called Rialto Consult. For those of you that are curious, you can read about it here.
Basically, the application allows physicians in one physical location to create radiology orders at a different location. One typical use case could be something like this: Hospital A runs a 24/7 radiology reading service. Hospitals B, C, ..., Z have 24/7 emergency response departments, but unfortunately they don't have any radiologists on site overnight. So while these hospitals can capture radiology images, they do not have anyone to read them. Thankfully, Hospital A wants to offer its radiology reading service to these other hospitals. Right now, the workflow goes something like this: Someone comes into Hospital B at 2AM with some emergency. The hospital decides that they need some images taken and read. Once the hospital captures the images, they fax an order over to Hospital A. Assuming Hospital A gets the order without any problems (fax machines suck), their radiologists will start reading the images. Often, having access to previous images ("relevant priors") is very useful, so the radiologist calls Hospital B and requests some images. These are sent over. Once the radiologist has enough information, they'll read the images and write (or more likely, dictate) a report summarizing their findings. That report gets faxed back to Hospital B, where they decide what to do next. The whole process is complex, unreliable, and slow.
Now with Rialto Consult, the workflow becomes much more seamless. In many ways, the experience is indistinguishable from both parties being in the same physical building, even if they're in different cities (or even countries!). Essentially, Consult offers a shared worklist. Both hospitals see the same worklist of radiology orders and their states in the workflow. When Hospital B wants a read done, they simply create an order in the system. That order is automatically sent electronically to Hospital A, along with a summary of the patient's history (including those very useful relevant prior images). Hospital A can then view the images and the patient's medical history right away. The radiologist can do all this from their workstation without having to call anyone. The radiologist's report is also automatically transferred to the original hospital, which is notified of the results immediately. The entire process is much simpler, more reliable, and more cost-effective.
The software is very cool and solves a real problem in the industry. The specific section I've been working on has to do with audit records. When anyone does anything with the system or your patient information, an audit record is generated. There might be as many as 40 audit records generated for one patient going through the workflow I described above, so you can imagine there are literally millions of these records to deal with. I was working on a system to store and display these records in an intelligent way.
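An individual audit record is conceptually tiny; the interesting part is the volume. Just to give a feel for the shape of one, here's a hypothetical sketch (the field names are mine, not Rialto's; real healthcare audit schemes like IHE's ATNA profile carry a lot more):

import java.util.Date;

public final class AuditRecord {
    private final Date timestamp;   // when it happened
    private final String actor;     // who did it
    private final String action;    // what they did, e.g. "VIEW_IMAGES"
    private final String patientId; // whose data was touched

    public AuditRecord(Date timestamp, String actor, String action, String patientId) {
        this.timestamp = timestamp;
        this.actor = actor;
        this.action = action;
        this.patientId = patientId;
    }

    public String toString() {
        return timestamp + " " + actor + " " + action + " patient=" + patientId;
    }
}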
Okay. Maybe I should actually work on that work report now instead of randomly blogging. :P
Friday, April 15, 2011
Life Dilemma #8
Today was FedEx day at work. Basically, it's a free day (24 hours) to work on whatever you want. The motto is "Deliver Overnight", hence the name. I started by playing with SmartGWT, a very comprehensive GWT framework. Check out their showcase. It's pretty impressive. Has anyone ever used it before? I'd love to hear your thoughts. At some point I took a break from SmartGWT and helped a colleague work on the software that validates our software licenses. That was a very interesting and different project. I feel sorry for the compiler that had to compile the code I produced, since there was a deliberate effort to obfuscate it. It was a fun project, but I feel dirty for violating every coding practice I've ever learned. :P
This last week got me thinking about what I want to do next co-op term. Working at Karos Health has been an amazing opportunity. I enjoy working with all my co-workers, and I have a lot of fun at work. The people are all very passionate and skilled at what they do, so it's a real pleasure to work with them. There's a lot of support for professional development. For one, everyone at the company seems immensely talented, so just working with everyone on a daily basis provides a lot of opportunity for learning. On top of that, the company invests a lot in professional development in the form of lunch and learns, book clubs, and trips to various UX and Agile P2P meetings. The actual software that we build is also very cool. We are coming out with products that don't exist in the market yet! It's very exciting stuff.
The dilemma is whether or not I want to go back there for my next co-op term. I think co-op is really awesome because you get a chance to work for up to 6 different organizations, all with different people, using different tools, and in different business markets. This is an incredible learning opportunity. I thought that I would learn more if I were to work somewhere else, but now I'm not so sure. I'd certainly get a chance to work with a brand new group of people, but I don't know if it's worth leaving such a fun job behind. If I do go back to Karos, I would like to work on server-side code. That way, I'd work with a (slightly) different group of people in the company and use different tools. I'll have to think about this a lot more in the upcoming months. I suppose this isn't really a bad dilemma to have.
In other news, I got an offer for a UW REAP position today. I am very excited to work on that project next semester. It will be an incredibly busy semester with Real-Time, but it should be one of the most interesting semesters so far. I'm excited (and scared).
Thursday, April 7, 2011
Git >:@
While git is a very cool and powerful version control tool, its command-line UI is just awful. The fact that there are no fully-featured GUI alternatives (that don't suck) makes things even worse.
Let's talk about staging. I would say that the staging area isn't useful in the majority of cases, so why is the default behaviour to force a staging step? Most of the time it's just an annoying "durr, stage everything please" step before you commit.
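To be fair, you can skip the whole dance for tracked files with
git commit -a -m "just commit my changes"
which auto-stages every modified (tracked) file before committing. New files still need an explicit git add, and of course the less-annoying path is the one that isn't the default.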
Then there are the actual commands. Want to add a file to be tracked?
git add <file>
Want to stage an already tracked file?
git add <file>
Why is it necessary to overload this command? At least these make some sense. How about unstaging?
git reset HEAD <file that's staged>
Really? reset? Why would you choose that instead of, you know, unstage!
Want to throw away your local changes and get back to the last commit? git revert would make sense... Too bad it's
git checkout -- .
Checking out previous commits makes sense, but having to do it by copying and pasting an SHA-1 hash sucks.
How about untracking and removing files? Well, there's:
git rm <file>
which will remove the file from your working directory and untrack it. There's also
git rm --cached <file>
which will just untrack the file, but keep the actual file in your working directory. Why --cached is the flag they chose, I'll never know. What's wrong with --tracked?
I could go on and on. I like git because of how powerful it is, but I hate how many usability problems it has. It makes the learning curve much steeper. I've been using it for 4 months at work now, and it still throws me off every once in a while because of how unintuitive it is.
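One small consolation: git lets you paper over the worst of the naming with aliases. Off the top of my head (so double-check the quoting for your shell):
git config --global alias.unstage 'reset HEAD --'
git config --global alias.untrack 'rm --cached'
After that, git unstage <file> and git untrack <file> actually do what their names say. It says something about a tool when step one of learning it is renaming its commands. :P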
New Projects At Work
I haven't been posting as often as I usually do, mostly because I've been extra busy at work. I started working on a different project this week with my team. It's very exciting, since this is a production application that will be used by real institutions very, very soon. The previous project I was working on was a research project, so the quality and usability requirements are obviously very different. I can't say things like "We'll deal with that later" anymore. :P Not to mention it has to play nice with other vendors. Let me tell you, that is not an easy thing to accomplish. The standards that exist are not as useful as I would expect.
I also need to start working on that silly work term report soon. I'm writing it on RESTful API design. I've worked on half a dozen of these APIs this work term, so I think it should be fairly straightforward to write. Hopefully it won't be too painful.
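In case you're wondering what the report is about: the gist of REST is modelling everything as resources addressed by URLs, with the HTTP verbs doing the work. A made-up JAX-RS resource (not one of the real APIs I worked on; ReportStore and Report are stand-ins) looks something like this:
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// GET /patients/42/reports/7 fetches one report as JSON.
@Path("/patients/{patientId}/reports")
public class ReportsResource {
    private final ReportStore store = new ReportStore(); // stand-in backing store

    @GET
    @Path("/{reportId}")
    @Produces("application/json")
    public Report getReport(@PathParam("patientId") String patientId,
                            @PathParam("reportId") String reportId) {
        return store.find(patientId, reportId);
    }
}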
Friday, April 1, 2011
A Lot To Learn
Today at Karos Health, we had a retrospective with Declan Whelan, an Agile coach in the Waterloo area. I thought that it was very interesting. I came out feeling like I still have so much to learn about this industry. I pretty much feel this way every couple of months. :P It's a weird feeling. I think I'll come out of university feeling much stupider than I felt when I came in. I guess having an awareness of what you need to learn is pretty important, though.
So here's a list of things I would like to work on; my personal improvement backlog. :P Priority to be determined.
- Test Driven Development. I feel like I won't really understand it until I actually spend a few weeks doing it. I'm still unconvinced of its benefits, but the best way to judge its effectiveness is to actually practice it. (There's a tiny sketch of what I mean right after this list.)
- Pair Programming. I've already done a little bit of this at work and for school projects, but I think the cross-training that it provides is really useful, and I'd love to try it for longer periods of time.
- Language Expertise. I still want to learn some language really really well. I think C# is a good candidate for this. It's still my favorite language.
- Technical Expertise. There are just so many technical things I don't know about. How do you do secure network communication? How can you ensure high availability? How can you efficiently do *? What standard libraries exist for doing *? I would like to know so much more about these topics.
- Agile. All my agile knowledge comes from many different informal sources. I think I should try to learn it more formally by reading through a book, or taking a course or something. There are a lot of fundamental things that I'm still trying to figure out, and I think that formal training would be very useful.
- Healthcare. There's so much to learn about being a developer in the healthcare industry. There are various protocols (HL7, DICOM) and frameworks for working with them, like XDS. I know very little about how these protocols work, and even less about the interoperability problems that arise from having so many different ones.
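To illustrate that first item, here's the red-green rhythm I mean, with a toy example of my own (the names and the discount rule are invented):
// Red: write a failing test before the production code exists.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceTest {
    @Test
    public void discountsTenPercentOverOneHundred() {
        assertEquals(99.0, Price.discounted(110.0), 0.001);
    }
}

// Green: write just enough code to make it pass, then refactor.
class Price {
    static double discounted(double amount) {
        return amount > 100 ? amount * 0.9 : amount;
    }
}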
This list is a little overwhelming. I don't know where to start. Instead of doing a "breadth-first search" into these topics like I have in the past, I'd like to dive into one of them and get to know them very very well. Too bad I'll have Real-Time next semester. :/ I guess I'll make time after that. :/
Monday, March 28, 2011
C++0x: Dead Horse With Jetpacks
So you may have heard that the newest version of C++ (C++0x) has been finalized. I finally took a look over most of the changes they made. You can find a nice, comprehensive list here. God dammit. I can honestly say that I've never seen a language with syntax this ugly. It's much more complicated than it has to be, mostly because it has to retain backwards compatibility. They took an already complex language and added even more complexity to it. I think they should have just let C++ die and waited for C# to take over. This standard desperately tries to revive a dying language by adding random features that other languages have had for years (decades, in some cases!). It's like strapping jetpacks to a dead horse and saying "Eh. That should work."
Should you use C++0x? Well, if for some reason you have to write C++, you should use the features introduced in 0x. Otherwise, I don't see why someone would choose C++0x over something like C# or Java. If you are writing something very low-level, object oriented C++ is the wrong language, and if you're writing almost anything higher level, C# and Java are much better choices. The only possible market for C++ I've ever seen is game design, where the language needs objects, as well as low level access for speed.
So, what do you think of C++0x?
Sunday, March 27, 2011
Why Linux Won't Be A Popular OS Any Time Soon
Yesterday, I went out with a few friends for some billiards. Of course, since we're all nerds, we got to talking about open source software for 3 hours at Tim Hortons. This happens pretty much every time I'm in Toronto and hang out with these people. I love it. We play pool, drink coffee, and talk about technology. I enjoy having a group of friends that I can have these conversations with. :)
Anyway, yesterday we touched on the topic of Linux becoming a popular operating system for the masses. One friend thought that Linux was on its way to becoming a popular OS for the average computer user. I disagree. Linux still has the image of an operating system made for experienced computer users, and until it breaks that image, it will never be accessible to the average user. For example, look at troubleshooting in Linux. Google any common problem you might have, and the first page of results will all be cryptic command-line tricks to get something to work. The language used is often way over the head of a lot of computer users, and the average user will be intimidated by it. Until Linux changes this culture of encouraging people to use the command line to solve any problem, it's never going to become popular with average users.
So is this command-line culture ever going to change? Maybe, but not anytime soon. A lot of members of the Linux community probably don't even consider building GUIs on top of a few simple commands, even though that's exactly what it would take to get more people to use Linux. I would argue that this community is not thinking about usability as much as it should be.
What do you guys think? Do you see this as a problem for Linux? Do you see it being solved any time soon?
Wednesday, March 23, 2011
Should UI be required for a CS degree?
Yesterday, I went to a uxWaterloo event. There were a bunch of 7-minute talks on various topics in usability. One that caught my interest was the question of whether UI should be a required course in CS degrees.
Based on the discussion that followed the talk, the majority of people seemed to like the idea of including UI courses in CS degrees as a required component. After all, a lot of people with CS degrees go into the workforce as software developers working on user interfaces. However, CS isn't really about developing usable software. It's not really about developing software, period. CS is more about the theoretical study of mathematical computation and information processing.
Software engineering is the program that's actually about developing software. I think UI should definitely be a core part of a software engineering degree, but I don't think it's a good idea to include it as a core course in CS. CS is already this murky field that's half theoretical and half practical. I think a good solution is to make the theoretical courses required for CS degrees, but offer a wide variety of optional, practical courses. UI, of course, should be on that optional list. Promoting these optional courses is also a very important part of getting people more interested in UX.
So what do you guys think? Should UI be a core component of CS degrees?
Tuesday, March 22, 2011
Testing with Real Data
It's really important to test your applications with real data. I learned this when I worked in QA for Ramsoft Inc. I found it much easier to spot problems with the product when there were real data values in the application. Certainly you don't need real values to spot glaring errors like segfaults, but to find the more subtle bugs, having real data really helps. Some usability problems become very apparent when you use real data instead of "ksodaguhkudhgau".
When you use real data, you put yourself in your user's shoes. Having this perspective on the application really helps create a better application. You can solve a lot of usability issues by using real data during development and testing, instead of discovering them when you put the application in front of your users for the first time.
To this end, when I do user interface development at Karos Health, I try to use real data as much as possible. This helps me find ways to improve the UI to make it more usable, and helps me understand the high level purpose of the application better. It also makes for much better demos at the end of our sprints. :)
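Here's a contrived little example (all names invented) of the kind of bug keyboard-mashing hides. The garbage string happens to fit the column, while a realistic name exposes the truncation immediately:
public class RealDataDemo {
    public static void main(String[] args) {
        // Both names go through the same 15-character column formatter.
        System.out.println(columnOf("ksodaguhkudhgau"));               // fits exactly; bug invisible
        System.out.println(columnOf("Margaret O'Connor-Villeneuve"));  // silently chopped to "Margaret O'Conn"
    }

    static String columnOf(String name) {
        return name.length() <= 15 ? name : name.substring(0, 15);
    }
}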
Tuesday, March 15, 2011
<3 Software Engineering (and Mockito)
At work today, I took a lot of time to write unit tests properly. There was a lot of refactoring, dependency injection, and mocking. I was using Mockito as a mocking framework, and I am very impressed so far. The syntax is simply beautiful. Simple, to-the-point, elegant. Check it out:
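Something like this (names changed to protect the innocent, so ReportStore, Report, and ReportViewer are stand-ins rather than the real code):
import static org.mockito.Mockito.*;
import org.junit.Test;
public class ReportViewerTest {
    private final ReportStore store = mock(ReportStore.class);
    @Test
    public void showsTheNewestReport() {
        when(store.newestReportFor("some-patient")).thenReturn(new Report("CT Chest"));
        new ReportViewer(store).show("some-patient");
        verify(store).newestReportFor("some-patient");
    }
}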
Wow. This is so clean! Look at line 7. It reads like an English sentence. Or Ruby. Go figure. Certainly not like most noisy Java code.
Today I remembered why I really love software development. It's so cool to see such complicated modules of code working together in such elegant ways. Things like dependency injection and programming to modular interfaces all come together to let you do some really powerful things, like unit testing. I'm so impressed that you can take any random component in a huge, complex system, and be able to run it in isolation. The amount of software engineering it takes to get that to work in a huge system is quite impressive. It reminds me why I'm so fascinated by software engineering.
Monday, March 14, 2011
Functional and Object Oriented Programming
In the past, I've talked about integrating functional and object oriented programming. I still feel like that's a great way to produce clean code.
This article addresses a few more aspects of functional and object-oriented programming, including bottom-up vs. top-down design.
Object-oriented programming often encourages thinking about code in a top-down way. Functional programs, on the other hand, tend to be built bottom-up. I've found this to be pretty accurate when working with both paradigms.
One particularly interesting point made in the article is about reusability. Specifically, the article claims that bottom-up code is inherently more generic because it's built before its exact uses are explicitly planned and encoded. I've never really thought about it like that, but it makes sense. In general, functional languages give you a lot of tools to create cleaner, more reusable code, and first-class functions are probably the biggest convenience they offer. Of course, reusability isn't always the right goal anyway; reusable code just happens to also have a lot of other very desirable properties.
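Here's a toy version of what I mean, in Java, which has to fake first-class functions with anonymous classes (my own example, so the names are made up). The generic filter is a bottom-up building block: it gets written before anyone decides which rules it will be used with:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FilterDemo {
    interface Predicate<T> {
        boolean matches(T item);
    }

    // Written once, with no particular caller in mind: a bottom-up building block.
    static <T> List<T> filter(List<T> items, Predicate<T> rule) {
        List<T> kept = new ArrayList<T>();
        for (T item : items) {
            if (rule.matches(item)) {
                kept.add(item);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // The rule is passed around as a value, so the same filter gets
        // reused for any predicate dreamt up later.
        List<Integer> evens = filter(Arrays.asList(1, 2, 3, 4, 5), new Predicate<Integer>() {
            public boolean matches(Integer n) {
                return n % 2 == 0;
            }
        });
        System.out.println(evens); // prints [2, 4]
    }
}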