
Exceptional Betas - How is the program kept so stable?


alangiv

Recommended Posts

A big thanks for making this app! µTorrent is an example for software development in general: rarely do you see a beta version of any software that is stable enough for general use. Just look at Microsoft programs - they require several service packs before they can even be considered near production quality.

My question is related to the development - how is such a high level of stability achieved?


If you like it, don't forget to donate! I didn't make a huge donation but I'll be making it annually for as long as µTorrent continues to be developed/supported. If everyone who used µTorrent would give a small amount, it would really add up and encourage continued development.

(I am not affiliated with µTorrent in any way other than being a user.)


µTorrent is so stable because it isn't overcomplicated or bloated. ludde adds features gradually, making sure they work before packing more in. And since he's only one person, he knows the entire codebase. A lot of Microsoft's bug problems come from the number of people who work on something, with one person's code not jibing with another person's.


ludde probably uses a very consistent variable-naming scheme.

He probably "insulates" portions of the program from other parts in various ways: subroutines, memory dedicated only to particular parts, and so on. That way, one part can fail locally without knocking out everything.

He has a good understanding of both the protocol and the philosophy behind µTorrent.

Lastly, his custom libraries are probably kept extremely spartan to prevent weird interactions, extra outside influences, and oddball buffer overruns.
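Something like this hypothetical bounds-checked buffer is the sort of spartan library helper I mean; a write that would overrun fails loudly at the boundary instead of silently corrupting memory elsewhere. (Illustrative sketch only, nothing from µTorrent's actual source.)

```cpp
#include <cstddef>
#include <cstring>
#include <stdexcept>

// Hypothetical example: a deliberately minimal buffer that refuses to
// overrun, so a bad write in one module fails loudly at the boundary
// instead of quietly trashing memory used by other parts of the program.
class SafeBuffer {
public:
    explicit SafeBuffer(std::size_t capacity)
        : data_(new char[capacity]), capacity_(capacity), used_(0) {}
    ~SafeBuffer() { delete[] data_; }

    // Copying disabled so two owners can't free the same memory.
    SafeBuffer(const SafeBuffer&) = delete;
    SafeBuffer& operator=(const SafeBuffer&) = delete;

    // Append bytes, throwing instead of writing past the end.
    void append(const char* src, std::size_t len) {
        if (len > capacity_ - used_)
            throw std::length_error("SafeBuffer: write would overflow");
        std::memcpy(data_ + used_, src, len);
        used_ += len;
    }

    const char* data() const { return data_; }
    std::size_t size() const { return used_; }

private:
    char* data_;
    std::size_t capacity_;
    std::size_t used_;
};
```

The point of keeping it this small is that there is almost nothing in it that can interact oddly with the rest of the code: one invariant, checked in one place.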


lol.. I get a chuckle when I see people who aren't in the industry try to guess why something is stable. Some points (very few) are valid, but most are myths, and none of the points guarantee stable code; they just make sure you don't make it harder than it needs to be. There is only one point that matters, which Vectorferret touched on, and I'll repeat it below.

The simple fact of the matter, for those interested, which anyone who programs for a living will tell you, is that because there is only one person writing all the code, he knows exactly what needs to be changed and can easily gauge the impact a single change will have on the entire program. Contrast this with a project where many team members contribute individual sub-modules with, at best, a loose understanding of how the other modules operate. The assumptions one team member makes about how something interacts with their code may be completely different from those of the person actually interacting with it, and that is where instability is usually born. Ideally, design and documentation should minimise this problem, but we are not in an ideal world, and interfaces and the like do change in a team environment. Whenever possible, the team working on a project should be kept small, but the trade-off in time needs to be considered as well.
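To make that concrete, here is a contrived sketch of the kind of assumption mismatch I mean; the function names are hypothetical, not from any real codebase:

```cpp
#include <cstdio>

// Hypothetical example of two team members' assumptions diverging.
// Developer A wrote the connection module and signals failure by
// returning a negative value:
int open_connection(const char* host) {
    (void)host;   // real code would attempt the connection here
    return -1;    // A's convention: negative means failure
}

// Developer B, writing the caller, assumed 0 means failure and any
// other value is a valid handle -- so -1 sails straight through:
void download(const char* host) {
    int handle = open_connection(host);
    if (handle != 0) {                             // B's wrong assumption
        std::printf("using handle %d\n", handle);  // proceeds with a bogus handle
    }
}

int main() {
    download("example.com");  // prints "using handle -1": instability is born
    return 0;
}
```

Neither developer wrote a bug in isolation; the bug lives in the gap between their assumptions, which is exactly why a single author rarely hits it.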

I'll go on to say that even a sub-par programmer can produce a decent, stable application if (s)he works on it alone. I'm saying this in general and not in any relation to ludde's development abilities, so please don't take it as such. Skill, on the other hand, is definitely needed when working in teams, due to the interworking challenges.

Read a few software management books and chase up terms such as "the mythical man-month" and you'll get the simple result: adding more people doesn't mean the software gets written linearly quicker with each person added, nor is it as stable in the end.

PS:

The fact that ludde isn't using the well-debugged MFC libraries for the GUI (as is my understanding; they can be statically linked so you don't need DLLs) indicates to me that he is taking more of a risk than he needs to in terms of stability. There may be benefits in his design that I am unaware of, but one rule of programming is that you do not reinvent the wheel if you want stability.


A big thanks for making this app! µTorrent is an example for software development in general: rarely do you see a beta version of any software that is stable enough for general use. Just look at Microsoft programs - they require several service packs before they can even be considered near production quality.

Anything MS releases that requires service packs is a few orders of magnitude more complex, in terms of the functionality it provides the end user, than anything µTorrent could ever hope to achieve. With service packs you are talking about the operating system and apps like Office. Let's get realistic here: we all offer praise to ludde for his efforts, but let's put things into perspective..lol. Nothing MS releases that requires service packs could ever be programmed by one guy from start to finish; it would take his entire life, and the product would be well and truly obsolete before he was done. Increase the number of developers -> increase the bug risk. Let's not ignore the fact that the more functionality a product offers and the larger it gets, the more difficult it is to update. When you factor in that companies hire new developers all the time who don't know the development history of the product but are expected to be productive from day one, I'm sure you can work out why there may be a difference in stability.

I think that is one of the reasons ludde won't go open source: he would lose control and visibility of how everything works over time if he didn't keep on top of it, and given that quite a few developers would suddenly start developing new features, it would be nearly impossible for him to do so. I'm not a real fan of open source for this very reason, and for the fact that you spend more time managing the commits and trying to monitor the quality of the source code coming in.


Yeah, they need to have a larger team of developers, but don't they have managers who try to oversee the relations between all the individual modules? And then what about those 'quality assurance teams' or whatever that are supposed to make sure everything is doing what it's supposed to?

More developers = more risk, but if you have a company that manages everything well, you will get decent and stable products. There are plenty of examples of this in the industry; even in open source, where the people may be even more 'random', you will have products that stand out.

Of course, open source programmers may be developing because they love to do it and have more zeal for their work, so that may be it. (Not saying that paid ones don't.)


@ReP0: Sometimes existing wheels might not be as smooth as one might like, so if one can make a better wheel to fit his needs, he should. So was the case with ludde. He's not a newbie GUI application programmer (not accusing you of saying he was), so I'm pretty sure he knows what he's doing with it ;P

Sidenote: Heh, these past few posts have been unusually cautious... "not saying this," "not accusing you of that," etc. xP


Yeah, they need to have a larger team of developers, but don't they have managers who try to oversee the relations between all the individual modules? And then what about those 'quality assurance teams' or whatever that are supposed to make sure everything is doing what it's supposed to?

More developers = more risk, but if you have a company that manages everything well, you will get decent and stable products. There are plenty of examples of this in the industry; even in open source, where the people may be even more 'random', you will have products that stand out.

Managers are there to facilitate what developers need to complete a project and to act as external interfaces to other departments for the group. I've never seen a manager make sure all the interfaces in a project are compatible. Not many projects have a chief architect, which is what you think a manager's job is, but it isn't.

Code quality is handled by peer reviews among team members, and depending on time constraints, the quality of such reviews can differ greatly.

Companies that are managed well make money and an *acceptable* product. In business, first to market is key. Bill Gates' DOS is a prime example of that lesson :). I've been involved in a number of projects that were superior in every way to their competitors, but we came out six months later and the market share was gone, even though the product that came first was still buggy and inferior. Being first to market rarely means you have time to test, debug, and quality-assure everything to the max. You produce what the market will bear and improve from there. Locking in customers without pissing them off to the extreme is the key. Of course, if there is no market pressure (competition), then there is no problem, which is probably where some of the decent, stable products you've seen came from. Not that there aren't exceptions, as with everything.

Now, open source in many cases isn't in the market to make mega bucks for its shareholders, so first to market means very little, giving them more flexibility.

Of course, open source programmers may be developing because they love to do it and have more zeal for their work, so that may be it. (Not saying that paid ones don't.)

No, I don't think zeal or love comes into play. All developers are driven people; those who aren't don't last long in the industry. The problem is that open source (or anything that isn't commercially driven) is guided by developers, not marketing, with no real time constraints or business plans that they need to produce and execute on in a certain time frame for their stockholders/business owners.

Politics is also a big deal in large companies like MS, where managers will play off each other to better their positions internally, at the cost of the product. It sounds counter-intuitive, but it's all in the internal structure of the departments, which are not focused on just a single product. It happens in all large organizations. Engineers will always ask for more time to produce a quality product, and rarely will they be given even 60% of the real time it takes to complete the product properly, due to market pressures. As the project goes on, and the estimates the developers supplied initially (which were ignored) turn out to reflect the real time the project should take, managers will throw bodies at it to compensate, but it rarely works out. In the end, from a management view, the project was delivered, and the management team can wash their hands and say they did all they could. Any issues down the line fall on the team and on the poor first-line manager of the product, who has very little power to begin with and gets screwed over along with the developers.

If a developer isn't there at the beginning of a project, (s)he is a liability from that point on, as my experience has shown. I tend to let the latecomers write unit tests and do bugfixing so they get to know the code, rather than implement the core of any code, because it minimises risk. I don't believe there are such issues in open source, where it's more of a hobby, and if you don't release it this month, then next month is fine too.
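As an illustration, a latecomer's first assignment might look like this: a tiny, self-contained test that documents the contract of one function. (parse_port here is a hypothetical stand-in, not code from any real project.)

```cpp
#include <cassert>
#include <cstdlib>
#include <iostream>

// Hypothetical function under test: parse a TCP port number, returning
// -1 on anything out of range. A newcomer writing tests against it
// learns the module's contract without touching its internals.
int parse_port(const char* s) {
    char* end = nullptr;
    long v = std::strtol(s, &end, 10);
    if (end == s || *end != '\0' || v < 1 || v > 65535) return -1;
    return static_cast<int>(v);
}

int main() {
    // Each assertion documents an assumption about the interface.
    assert(parse_port("6881") == 6881);   // a typical BitTorrent port
    assert(parse_port("0") == -1);        // below the valid range
    assert(parse_port("65536") == -1);    // above the valid range
    assert(parse_port("80x") == -1);      // trailing junk rejected
    std::cout << "all parse_port tests passed\n";
    return 0;
}
```

The tests are low-risk work, and every assertion the newcomer writes forces them to pin down an assumption that would otherwise stay in their head.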

Merged Post(s):

@ReP0: Sometimes existing wheels might not be as smooth as one might like, so if one can make a better wheel to fit his needs, he should. So was the case with ludde. He's not a newbie GUI application programmer (not accusing you of saying he was), so I'm pretty sure he knows what he's doing with it ;P

Sidenote: Heh, these past few posts have been unusually cautious... "not saying this," "not accusing you of that," etc. xP

Well, I'll disagree. I've used MFC quite a bit, and it's been around for a while now, so it's stable and quite easy to program with in comparison to the raw win32 APIs. I think it's more that he may be more comfortable with the raw win32 API, because of the experience he had starting out with it from the beginning of Windows programming, rather than having to learn MFC, which he may not be as familiar with. There is nothing rough about MFC these days; in fact, raw win32 is what's considered rough, imho. I'd be curious about his true reasoning for not using MFC or an equivalent framework. It may have added a few kilobytes to the final file size, but nothing immensely significant for the simplicity it offers in comparison.
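To make the comparison concrete, this is roughly the ceremony the raw win32 API demands just to put an empty window on screen; an MFC skeleton app wraps all of it in its CWinApp/CFrameWnd pair. (A minimal sketch for illustration, Windows-only.)

```cpp
#include <windows.h>

// Minimal raw Win32 sketch: everything below is boilerplate that an
// MFC skeleton application generates for you.
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    if (msg == WM_DESTROY) {   // window closed: end the message loop
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);  // default handling for the rest
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nShow) {
    WNDCLASS wc = {};                     // describe and register the window class
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = TEXT("DemoWindow");
    RegisterClass(&wc);

    HWND hwnd = CreateWindow(TEXT("DemoWindow"), TEXT("Raw Win32 demo"),
                             WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                             640, 480, NULL, NULL, hInst, NULL);
    ShowWindow(hwnd, nShow);

    MSG msg;                              // hand-rolled message pump
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return (int)msg.wParam;
}
```

None of it is hard, but every line is something you maintain yourself instead of inheriting from a debugged framework, which was my original point about risk.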


Like anything in programming, choosing the right tools, algorithms, etc. is an important decision to make. MFC is not as bad as people make out, and usually the ones who complain either do not understand how to use MFC correctly or don't like the pattern the MFC guys implemented, which needs to be followed to do certain things within their framework. There are many programmers who cannot adapt to something different from what they are used to (a flaw in character, imho) and bitch and moan when they can't do it any which way. For those types of people the raw win api is the way to go, but the trade-off is time, having to handle the nitty-gritty things yourself. That can also lead to bugs if you are inexperienced, because you need to handle a lot more yourself. Of course, using the raw API is also beneficial because it lets you construct custom widgets, which may not be as easy to accomplish in MFC in all cases, and if done properly it will give you a good grounding in how Windows works. Nothing is stopping you, of course, from mixing MFC calls with raw API calls in such a case.

I see this debate in the same light as the assembler vs. high-level language debate. Most people in the early days of assembler put down high-level languages when they came out, for pretty much the same reasons we hear today from MFC critics. In the end, the simpler, quicker, clearer structure of high-level languages, with common functions in the form of libraries so users didn't have to reinvent the wheel every time, won out, and we'll see the same for GUI programming as time moves on and .NET comes more into play. I'm not saying MFC is the holy grail, but I find it to be a lot better than raw calls, and things will get better as the libraries/frameworks evolve even more.

I won't deny that MFC is dangerous if you don't take the time to understand how to use the framework (as anything in programming is if you don't understand it properly), but once you do, it can immensely speed up your development times compared to using the raw win APIs. My first foray into MFC was a bit of a disaster, I'll admit, but that was my fault for not understanding it. That's why good developers do mini projects when we are in unfamiliar territory: so we can learn from our mistakes and not blame the language/framework (unless it really is warranted) when we use it on serious projects. The trade-off is that you need to stick to their way of doing things, which not all people like. The only truly valid argument I see is that it may add some bloat (a couple of hundred KB at most).

At the end of the day you use what best suits your needs and goals; there typically is no perfect solution. The people hanging onto grudges against MFC are usually the same types who had a bad experience back in Win3.1 and refuse to re-evaluate today's product to see what it has to offer, blindly recommending a Mac solely because it isn't Windows :). You need to evaluate what the product is today, not what it was years ago, and that evaluation needs to be revised constantly.

Anyway, we've gone way off topic, so that's all I'm going to say on it. In the meantime, I need to do a mini project in .NET next to see how that is going. :)


But it's not "little"... it takes a lot more CPU, has an awful startup time, and hogs resources. All that just to avoid making a half-decent GUI yourself?

Actually, you misunderstand the point of .NET. It is not designed to be lean and mean, or to be a language only for creating a half-decent GUI. .NET has the same goal as Java, and more: it is meant to save businesses the task of rewriting codebases (in a number of languages existing today, not just one like Java) to work on multiple CPUs/OSes, and that is why you use it. To achieve this there is a trade-off. It has nothing to do with avoiding anything, other than the time invested to get your app across many platforms. Azureus can run under a number of environments, whereas µTorrent at the moment only runs on Windows. Like I've said before, you choose your tools to achieve your goals; in Azureus's case, the goal was to be portable, with some sacrifices in resources, while µTorrent's was to have a small footprint with a minimalistic set of features and run under Windows (and Linux, I think, but that would require some special handling of the GUI, meaning you are no longer maintaining one codebase across multiple OSes).

Java is a dead end, up to a point. The developers of the language have come out and admitted that the language itself has become a mess and that some problems with compiling it cannot be resolved with ease, if at all. I don't know the specifics, but it has some major flaws due to its poor evolution management over the years. I avoided Java in my career just because I wasn't in the right market, though I won't bag it, because I can see its advantages and its usefulness. The question is whether .NET can take advantage of this, do the right thing, and avoid Sun's mistakes. Time will tell. (I'm not convinced yet.)

As for devouring resources, it's not as bad as some people make out; it has a lot to do with how coders implement their code as well. With virtual memory and the amount of physical memory available in average computers, most developers don't see the return on optimising memory usage while increasing complexity, and I agree with this thinking. As for speed issues, JIT compilation solves most of that: any slowness is felt when a piece of code is first run, with subsequent uses of that code running quickly, to the point where you couldn't tell whether it was Java or not in 90% of applications. If Java were that slow, people would never have threads going on about how Azureus was able to achieve X when torrenting and why <insert alternative> isn't achieving this. As for slow startup time, well, really, who cares? An extra couple of seconds here or there, and from that point on there is no problem. I think you have other problems if you have a fetish for opening and closing applications constantly without letting the app do what it was intended to do.

I'm sorry, but with the price of memory today, I think people are whinging for the sake of it. For God's sake, just go out and buy that extra gig for a few bucks already. If you can afford internet access, you can afford that extra gig, and you will never look back. I personally have 2 GB and have never looked back. It's not like you are only buying it to run your torrent client; your system will benefit all round, especially if you are so concerned that an additional 60 MB of memory usage for a torrent client grinds your system to a halt. Arguing that you shouldn't have to upgrade memory is like arguing that you should never have had to replace that 286 your dad bought you when you were a kid. Times move on; somehow we all upgrade CPUs and graphics cards, but upgrading to a gig or so of RAM is unacceptable in today's environment? I'll never understand it.

The only valid complaint I have when it comes to Azureus is that its GUI is a bit laggy, but I'm not sure that's totally Java's fault. I've run other Java apps which don't exhibit this lag.


The simple fact of the matter, for those interested, which anyone who programs for a living will tell you, is that because there is only one person writing all the code, he knows exactly what needs to be changed and can easily gauge the impact a single change will have on the entire program. Contrast this with a project where many team members contribute individual sub-modules with, at best, a loose understanding of how the other modules operate. The assumptions one team member makes about how something interacts with their code may be completely different from those of the person actually interacting with it, and that is where instability is usually born. Ideally, design and documentation should minimise this problem, but we are not in an ideal world, and interfaces and the like do change in a team environment. Whenever possible, the team working on a project should be kept small, but the trade-off in time needs to be considered as well.

I'll go on to say that even a sub-par programmer can produce a decent, stable application if (s)he works on it alone. I'm saying this in general and not in any relation to ludde's development abilities, so please don't take it as such. Skill, on the other hand, is definitely needed when working in teams, due to the interworking challenges.

Definitely correct. A one-man team has a better chance of keeping a program stable. What I would like to know is which team development method has the best chance of keeping things stable.

µTorrent is stable now, but I am sure it will become more of a challenge as code size and complexity increase. It will be interesting to see whether ludde still has the energy to deal with µTorrent two years from now. I think it would be an idea to slowly convert µTorrent to open source before it becomes too big to handle. Either way, ludde has clearly proven himself a skilled and dedicated software engineer. I just hope he will be able to stick with the project.

Blender (www.blender3d.org) seems to have managed multi-developer work on a (very) large project, but they wasted approximately a year just understanding what was going on with the previous, pre-open-source code. There are many open-source projects which succeed in this area, and what seems to be twice as many that fail.

About Microsoft: I think that until recently they simply did not care too much about software quality. If it worked reasonably well and brought home the bacon, it was OK. Microsoft spends more on marketing than on development, so I think that says something about their direction.

There is an old saying, "Too many cooks spoil the broth," which I think applies well to any project with many team members. Unless, and I stress "unless", proper management is applied.


If Java is so fast, then I'd like to know why almost everything done in Azureus consistently takes up way, way more CPU than any other client. Never mind the fact that loading a large torrent takes up 100% CPU for a good MINUTE or two, and can even crash with out-of-memory errors! That is unacceptable.


@shkbobo: It's also horribly slow and bloated on Linux and all the BSDs that I've been silly enough to try using it on.

@ReP0: There is no excuse for making something that uses a hundred and fifty megs at runtime when it should be using less than ten, period.

The one thing Java is good at is being a proof of concept. It shows that the underlying idea of truly cross-platform code is possible. That's a reason to give it, and the subsequent platform-neutral languages, a look; kick the tires, so to speak. That's not a reason to actually use it for something people are going to run regularly. And yeah, that goes for OOo too.

