Monday, June 27, 2011
Commenting: The Lazy Way Out
I'm a big believer in self-documenting code: code that is structured to be readable without comments. Comments have a lot of problems. First, they are notorious for getting out of date. If you've ever been bitten by a misleading comment, you know that no comment is much better than a false one. I see most comments as crutches: you have this bad code, and you try to "fix" it by adding comments, since that's the easiest way to make the whole package somehow understandable. Unfortunately, at the end of the day, the code is still awful. In this way, comments are the lazy way to make code readable. In fact, most of the time I treat comments as a potential code smell. It is almost always better to refactor the code to be clearer than to annotate it.
I've heard other developers say that self-documenting code is a lazy excuse for not adding comments. I disagree. Writing self-documenting code is orders of magnitude harder than writing descriptive comments, and it requires far more time and effort. But it is also much more effective at making code readable. When your code only makes sense in the presence of comments, you make that code much harder to use elsewhere. Are you going to include the comments wherever the bad code is used? Copy-pasta?
There are, however, a few cases where comments are the way to go. They are much easier and quicker to write than actually refactoring the code, which makes them preferable when you have to write code under a very tight deadline. But I would treat them like any other "hack" developers do in the heat of a release: do it now, and fix it as soon as the deadlines loosen up.
There are also times when refactoring the code would produce a lot more code for little readability benefit. In these cases, a comment might be the better solution. Having too much code, however clean, is also a big problem, because it makes the overall project harder to understand. To me, though, this seems like a rare case. It is almost always better to refactor than to add a comment.
As an exercise, take a look at some old code you wrote and find the lines with comments. Can you think of a way to refactor them to be cleaner? I think in 90% of those cases, you will be able to make the code much more readable without comments.
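To make that concrete, here's a hypothetical before-and-after in Java. All the names are made up for illustration; the point is that the comment disappears because the refactored code says the same thing itself:

```java
public class DiscountExample {

    // Before: a comment props up an unclear condition.
    static boolean before(User u) {
        // check if the user is eligible for a senior discount
        return u.getAge() >= 65 && u.hasMembership() && !u.isBlacklisted();
    }

    // After: the condition lives in a well-named method,
    // so the call site needs no comment at all.
    static boolean after(User u) {
        return u.isEligibleForSeniorDiscount();
    }
}

class User {
    private final int age;
    private final boolean member;
    private final boolean blacklisted;

    User(int age, boolean member, boolean blacklisted) {
        this.age = age;
        this.member = member;
        this.blacklisted = blacklisted;
    }

    int getAge() { return age; }
    boolean hasMembership() { return member; }
    boolean isBlacklisted() { return blacklisted; }

    boolean isEligibleForSeniorDiscount() {
        return age >= 65 && member && !blacklisted;
    }
}
```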
Friday, June 24, 2011
Why Agile Development is More Fun
I just read this article claiming that Agile is "boring". I'm not sure how the author reached that conclusion. He also claims that Agile is very rigid and strict, even though it's probably one of the most relaxed project management methodologies out there. It's certainly more dynamic and flexible than the Waterfall model.
From the article, it seems the author works somewhere with no concept of project management at all. He writes as if he has no deadlines to meet. I'm not sure where he works that he can get away with this; almost all projects have deadlines. It's very useful for business people to have estimates and set deadlines, and pretending they don't exist is no way to develop software professionally. It's certainly not a realistic way to grow as an organization.
The writer says that Agile development gets boring after you do it for a couple of projects. I'm not sure where that's coming from. I find Agile development environments much more interesting, because there is much less repetition: from iteration to iteration, you could be working on very different things. Agile allows (and even encourages!) developers to explore other areas of the software and cross-train. You are also much less likely to be pegged as the "database guy" or "UI guy" on an Agile project. While you might have a lot of experience with UI, your task is whatever the project needs. If that means moving outside your domain, so be it.
When I worked at Karos Health we practiced Scrum, a form of Agile, and I found it to be very flexible. While most of the time I was developing UI code, I also got to participate in, and see, all the other parts of the application.
Also, Agile teams are encouraged to work very closely together. This interaction creates an interesting working environment where you are constantly learning. It's certainly more engaging than working your way down an ad-hoc to-do list by yourself, conversing with other developers only when absolutely necessary.
I suspect that the author has never worked in an Agile company (or at least one that's practicing Agile correctly), because his comments seem to be the opposite of what Agile development encourages.
Monday, June 20, 2011
Improving Performance
Coding Horror updated today, talking about the importance of performance for web applications. He cites studies that show significant drops in website usage as pages slow down. While this is definitely important, you have to be careful not to get carried away: unless you're Google, you probably don't need to shave a few milliseconds off your page load times. My basic rule of thumb is to only optimize when it will make a noticeable difference to the performance of your application. Humans can't detect differences of a few milliseconds.
Of course, there are times when you do need to optimize for performance, and doing it in a smart way can save a lot of developer time. Yahoo has a well-cited set of tips for improving site performance. Some of these are really easy to do and have a major impact on load times. Minimizing HTTP requests is a big one: it's fairly easy to merge all your JS files into one optimized file, and there are plenty of tools that do this for you automatically (like Closure Tools).
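Just to illustrate the merging idea, here's a minimal Java sketch of the concatenation step (the file names are made up). A real tool like Closure Compiler goes much further and actually minifies and rewrites the code:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MergeScripts {
    public static void main(String[] args) throws IOException {
        // Hypothetical input files; one combined file means one
        // HTTP request instead of three.
        String[] inputs = { "jquery.plugins.js", "app.js", "widgets.js" };
        StringBuilder combined = new StringBuilder();
        for (String name : inputs) {
            combined.append(new String(
                    Files.readAllBytes(Paths.get(name)), StandardCharsets.UTF_8));
            combined.append('\n');
        }
        Files.write(Paths.get("combined.js"),
                combined.toString().getBytes(StandardCharsets.UTF_8));
    }
}
```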
Some 80-90% of the user's time is spent downloading "stuff" to the client, so minimizing the "stuff" is a very powerful way to improve response times. For example, look at the source of http://www.google.ca/. You won't see "wasteful" things like spaces and line breaks! Granted, that's a pretty extreme example, and Google is one of the few sites that needs to be that optimized.
You can get another huge performance boost by using the cache better. HTTP has caching built in via conditional GETs, but it requires websites to set their response headers intelligently, and using a longer expiry time can notably improve performance. In general, caching is perhaps the biggest thing keeping systems fast: if your computer didn't have a cache, it would barely be able to function, and if DNS name servers didn't cache anything, the internet would crawl.
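As a rough sketch of what that looks like in practice, here's a Java servlet (with a made-up timestamp and resource) that sends a long expiry and answers conditional GETs with 304 Not Modified:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CachedResourceServlet extends HttpServlet {
    // Hypothetical timestamp for when the resource last changed
    // (a multiple of 1000, since date headers have second precision).
    private static final long LAST_MODIFIED = 1307059200000L;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Let the browser reuse this response for a week.
        resp.setHeader("Cache-Control", "max-age=604800");
        resp.setDateHeader("Last-Modified", LAST_MODIFIED);

        // Conditional GET: if the browser's copy is still current,
        // reply 304 Not Modified and skip sending the body entirely.
        long ifModifiedSince = req.getDateHeader("If-Modified-Since");
        if (ifModifiedSince >= LAST_MODIFIED) {
            resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
            return;
        }

        resp.setContentType("text/css");
        resp.getWriter().write("/* ...the stylesheet... */");
    }
}
```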
Another simple change is the order of information on your pages. You want your CSS at the very top of the HTML file: once the browser has it, it can start progressively rendering the page. You also want scripts at the bottom, because they take a (relatively) long time to download and can block concurrent downloads of other resources.
A very powerful tool here is the profiler; some browsers (like Chrome) have one built in. A profiler can tell you exactly where the bottlenecks in your system are, and you should never optimize anything before consulting it. Often, the thing you were going to optimize turns out to be negligible compared to something else.
I thought these were some interesting things to know in the few cases where you need to spend time optimizing for performance.
Saturday, June 18, 2011
TDD and YAGNI
Here's an interesting read. It talks about how, if you practice Test Driven Development (TDD) in an Agile environment, there is a lot of pressure to adhere to YAGNI, which stands for "you ain't gonna need it". Developers often try to predict what features the code might need in the future and build them in right away.
The problem is that most of the time, you won't actually need the feature you predicted you would. To avoid this, supporters of YAGNI try to write the smallest amount of code possible to accomplish the task at hand. TDD has a similar belief: it stresses writing the smallest amount of code possible to satisfy a test.
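Here's a tiny made-up example of that rhythm in Java with JUnit: the test comes first, and the implementation does only what the test demands, with nothing speculative built in:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {
    @Test
    public void appliesTenPercentDiscount() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.discounted(100.0), 0.001);
    }
}

// The YAGNI-style implementation: just enough to satisfy the test.
// No configurable discount tiers, no currency handling; those wait
// until a test actually demands them.
class PriceCalculator {
    double discounted(double price) {
        return price * 0.9;
    }
}
```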
This seems like an extreme to me. While I agree that future-proofing every part of your application is a silly idea, I don't think this sort of TDD and YAGNI purism is very scalable. It's not really about doing the simplest thing possible; it's about doing the simplest thing possible that doesn't code you into a corner. You want to make reasonable assumptions about what will change, and future-proof that. It's important to be pretty conservative with your guesses here, though.
A good developer will be able to draw on past experience to predict that some things will happen. If they think it's easier to build it in now, they should. I would trust the judgement of an experienced developer over a best practice.
I feel like YAGNI is overcompensating for developers who apply design patterns to everything in sight because that's what they learned in school. That is obviously the other extreme.
It's as if software development practices swing on a pendulum, from one extreme (Waterfall, design patterns everywhere) to the other (Extreme Programming, YAGNI). It's still a very volatile field because it's so young. Hopefully the industry will settle down somewhere in the middle.
Labels: Agile, Software Engineering, Test Driven Development
Thursday, June 16, 2011
Ignite Waterloo
Yesterday, I got a chance to go to Ignite Waterloo 6, a community event where people give 5-minute talks on topics they're passionate about.
As always, it was an excellent event. It's fun to see some of the same faces at all the community events in Waterloo.
There were some notable talks. Cate Huston gave a talk entitled "Why Do Programmers Have to Lie to Get Dates?", where she claimed software developers have a communication problem we need to address. There is a lot of confusion about what software developers actually do, which leads to some interesting questions like "So you work for the internet?". If we figure out how to communicate better, not only will we create better software, but people will understand what we actually do. :P
Syd Bolton talked about cool uses of old computers. He is one of the founders of The Personal Computer Museum.
Bob Rushby, ex-CTO of Christie Digital, gave a talk on how the future will be full of pixels. He imagined a world where everything analog is replaced with something digital. Cool stuff.
Ben Brown gave an interesting talk about getting rid of all road signs. Apparently some places in Europe have done this with great success. I'm not convinced that this would work in Canada, especially in larger cities.
Steven Scott debunked the "I've got nothing to hide" argument on Privacy. Also an interesting discussion.
To finish the night, my good friend Amal Isaac gave a great talk on The Technological Singularity. He talked about an interesting future when, inevitably, computers surpass our intelligence.
All-in-all, it was another great event in the KW community! :)
Tuesday, June 14, 2011
Resumes for Programmers
How useful are resumes for programmers? I've read a few articles now (including this one, entitled "Programmer Resumes Are Deprecated") claiming that employers are much more interested in artifacts and evidence of your programming: things like GitHub accounts, personal projects, and development blogs.
I think part of the problem is that when someone writes "Experienced with C#" on a resume, employers don't really know what that means. Without hard evidence to back you up, it's hard for employers to believe you. Perhaps more importantly, skill levels are relative: I might think I know C++ really well when, in reality, I only know a small part of the language. These differences in perception are a pretty big problem in hiring developers.
Some skills are also really hard to "prove" on a resume. Sure, if you put "Proficient in C#" and then list a bunch of jobs where you used C#, employers are more likely to believe you, but how do you prove good object-oriented design skills? Or knowledge of the SDLC? Or of processes like Scrum? You could try to force in some sentences about all these skills, but that would make your resume really long, and you'd still have the problem of what "proficient" really means.
A better solution might be to include links to things like blogs and personal projects in your resume. That way, when you say "Experienced with C#", your employer can check out what your "experienced C#" code actually looks like, and make their own decision about your skill level instead of trusting that what you mean by "experienced" matches what they mean.
I don't think we should scrap resumes altogether, since they are a good way to summarize your skills for someone without a lot of time. However, I think the hiring decision should focus more on tangible projects that employers can see for themselves.
Sunday, June 12, 2011
More on Mixed Reality Interfaces
Here are a few videos showing how Mixed Reality Interfaces (MRI) work. As I mentioned before, our REAP team this term is exploring interesting uses for this technology.
This is the best video, I think:
There are a few more here and here.
Pretty cool stuff. Our REAP team is currently looking into getting a sample museum exhibit built using this technology, so we can demo it to some real users and see what they think.
In other news, rankings come out on Friday! Yay! I have 6 interviews before then, so it'll be a busy week. Other notable things next week include Ignite Waterloo (so stoked!) and an interesting sounding talk at uxWaterloo. There might be a midterm somewhere in there too, but that's considerably less interesting. :P
Wednesday, June 8, 2011
Common Interview Question: Abstract Classes vs. Interfaces
I had an interview today where, yet again, I got asked the difference between an abstract class and an interface. In fact, I would estimate about 50% of the job interviews I've had for Java development have asked this exact question.
The answer is pretty straightforward. An interface defines some behaviour that can be added to an existing class. The class can choose how to implement that behaviour, but by implementing the interface, it declares that it has a certain capability.
An abstract class isn't used to add capabilities to an existing class. Instead, it's meant to be a basis for future classes. Abstract classes can also do some things that interfaces can't: specifically, they can hold state and provide default method implementations.
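A quick sketch in Java (all names hypothetical) shows the distinction:

```java
// An interface describes a capability that any class can opt into.
interface Printable {
    void print();
}

// An abstract class is a basis for future classes: it can hold state
// and provide default method implementations, which a plain Java
// interface cannot.
abstract class Shape {
    private final String name;      // state: impossible in an interface

    protected Shape(String name) {
        this.name = name;
    }

    public String getName() {       // default behaviour for subclasses
        return name;
    }

    public abstract double area();  // subclasses must fill this in
}

public class Circle extends Shape implements Printable {
    private final double radius;

    public Circle(double radius) {
        super("circle");
        this.radius = radius;
    }

    @Override
    public double area() {
        return Math.PI * radius * radius;
    }

    @Override
    public void print() {
        System.out.println(getName() + " with area " + area());
    }

    public static void main(String[] args) {
        new Circle(2.0).print();    // circle with area 12.566...
    }
}
```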
If you have a job interview for a Java developer position, I recommend that you know how to answer this question.
Ugly Concept Cars
Why is it that most concept cars I see look just awful? Like this:
Lamborghini Ankonian Concept
Or this:
Pontiac Solstice Concept
Or most of these!
They even managed to make an Aston Martin look bad! :(
Aston Martin One-77
Saturday, June 4, 2011
REAP Projects
I thought I'd give an update on what we are doing for the REAP project this term. We are working with Mixed Reality Interface (MRI) technology. Check out the videos (entitled "MRI - demo") on their site for a quick demo of its capabilities. It is essentially an interactive table that connects to an external display, and actions on the table are reflected on that display. You can imagine having a "character" (think: Lego man) moving around the surface of the table, with the character's point of view shown on the external monitor. If the table displays a floor plan, the monitor shows the Lego man's view from inside the room; turning the character on the table is equivalent to looking around the "room".
The table offers some other nice features. It reads "barcodes" (really just pieces of paper with patterns) placed on it, and then takes some action in the table or external monitor view. This lets us dynamically change what's happening on either the table or the display.
Our REAP project involves trying to find uses for this technology. We are mostly approaching this from a technology-first design point of view.
The first business case we are exploring is home design. Imagine being able to visualize and experience a 3D view of your house before it's even built. The barcode system allows for very quick customization, so it can really help home buyers and designers visualize different design combinations. For example, if you wanted to see what marble cabinets look like in a red-painted room, you would simply throw those two barcodes on the table and view the combination from different points of view. Currently, home buyers have to do all this combination visualizing in their heads. Needless to say, that is much less effective (especially if you're someone like me :S).
The second case we are pursuing is virtual museum exhibits. Imagine modeling an entire Roman city and letting people walk through it and explore it any way they want. There would be various points in the virtual world where information could be displayed. Better yet, one could render animations and movies instead of just a static world: you could watch two dinosaurs fight, and then choose where to go next in the virtual world. How about a shared world between many tables? That way a whole class could experience the same world, each in their own way, almost like an MMO game. We even played around with making your own exhibits by placing figures (with barcodes on the bottom) on the table; the system would then interpret the objects on the table and make them interact.
In the future, we might want to swap that external monitor for something like 3D cave technology, which would let us project a 3D world around the users to create an even more immersive experience. For now, though, we are focusing on starting small.
These are just some of the fields we are looking at right now. Technology like this is fairly general and could apply to almost any field, so we decided to pick a few ideas and run with them. If we talk to the business users and find they don't think an idea is useful, we can just move on to one of the umpteen others we have. It's a fun way to work.
The people on the REAP project are all very cool (and talented!) people, so it's a lot of fun to work with them. There's also a bunch of free training (Agile training, presentation training, etc). All-in-all, it's a pretty great part time job. :P
Friday, June 3, 2011
Sony Hacked Again
So it looks like Sony was hacked again. Things are not looking good for Sony; it's been almost two months since the original hack in April. Why does Sony still have unencrypted databases? Didn't they hire a bunch of security consultants after that first compromise? I would imagine "encrypt your freakin' data" was one of the first things those security experts said. So why is this still a problem?
One of my friends thinks it's a size problem: Sony has a lot of systems to fix, and the hackers are working faster than Sony's developers. I'm told that things move very slowly in huge companies. While this might be part of the problem, I feel like there must be something else at play here. Sure, protecting against SQL injection is hard(ish), but hashing data shouldn't be that bad. Perhaps their code is poorly written, and adding encryption is genuinely hard to do; but in any good system, there should be just one layer talking to the data directly, and in such a system, making this change wouldn't be difficult. They would also have to hash all the existing data, but that is easy script work.
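For what it's worth, here's roughly what that "easy script work" could look like with nothing but the Java standard library (a sketch with a made-up value; for passwords specifically, a slow, purpose-built scheme like bcrypt is the better choice):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class HashExample {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);          // a fresh salt per record

        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.update(salt);
        byte[] hash = digest.digest("hunter2".getBytes());

        // Store the salt and hash instead of the plaintext: the value
        // can still be verified later, but never read back.
        System.out.printf("salt=%s hash=%s%n", toHex(salt), toHex(hash));
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }
}
```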
Maybe it's an IT infrastructure problem. That is, encrypting data makes things much slower (conceivably twice as slow for data access), and maybe the Sony servers can't handle that extra load.
I also wonder why this was such a huge security hole in the first place. Is it really because Sony doesn't have any security-conscious developers? I doubt it. Security is a pretty popular subtopic in software engineering, so I'm sure someone on their payroll took the time to learn about it. It's not like you need a Master's in security to know to encrypt sensitive data.
I think the developers were just lazy. It's certainly easier to develop and test a system without encryption. It was probably put on some to-do list for later, but later never came: the development team was probably more interested in starting new projects or features, and they assumed no one was really trying to break their system anyway. Or maybe the developers wanted to implement these security features, but management didn't think it was worth the time and money.
I would really like to know how this all happened, but I don't think Sony will ever reveal the real reasons. I do know that it must be..."fun"...to be working at Sony right now.