by Martin Michlmayr, University of Cambridge
Quality is a hard thing to define: most people can perceive the difference in quality between two objects, but describing it objectively is difficult. Thus, while we all agree that quality exists, finding a measure or metric for it is a daunting task.
One of the dictionary definitions for quality is "fitness for purpose". This is the basis of a software quality metric: comparing the software to its specification. However, this presupposes that the specification itself is complete and of good quality. In addition, most open source projects don't have a formal, written specification to compare against. Thus, this isn't very useful for our purposes.
We're forced to recognize that quality is not a single thing. Attributes of quality include efficiency, reliability, usability, extensibility, portability, reusability, and maintainability. Perspective counts as well: the user's perception of software quality will differ from the developer's. The source code may be beautiful, but the program may be hard for the user to master.
So for software quality assurance, we could say that the code must do what it's supposed to do. We can also compare our code with the industry's and see if it works as well as theirs. Some OSS has reached industry-standard levels, like Linux in the operating system arena, while other OSS efforts lag behind.
Often during development, under time pressure, an attitude develops: "let's just get the software working; we'll worry about quality when we have time." Unfortunately, there never is enough time, and if quality is not built in from the beginning, the final result can be a hopeless mess that needs rewriting to become quality code. And there's never enough time for another rewrite...
F. Gasperoni: Quality cannot be added on later, but it can be reverse engineered from quality software. Certification, for example, examines code to check that quality criteria are met. >Franco>
The ISO definition is "all planned and systematic activities to ensure quality". This can seem to contradict the volunteer nature of most OSS development. Many volunteer developers work when they have time, and their work is often not very "planned". So the fundamental question becomes: can you have assured high levels of quality when the developers are volunteers?
Since we've often mentioned "the community" in today's discussions, we should perhaps take a moment to ask ourselves, "who is that community?"
An example scenario: a graduate student at a university develops software while earning his degree. He finishes, and leaves the university. People in the university say to themselves, "Hey, let's open source this software. We'll put it up on a web site, and 'the community' will come and take over its maintenance." Companies can do the same with products they're no longer selling. Somehow, 'the community' will come and take care of this code.
But does this community exist?
Traditionally, in most OSS development, the work is done by volunteers in a distributed way. But more and more, open source software is developed by paid developers who are co-located.
In EUROCONTROL's case, most work is done by paid personnel. But if amateurs build flight simulators and model train controllers, couldn't you get them to come into your community?
How could you motivate this community? Maybe the simulator builders could have closer access to aircraft profile data in exchange for their efforts. Or imagine that after some future air accident, the data and programs were made available to the community, with the challenge of finding a way to change the software to avoid this type of accident in the future.
This community won't come automatically. EUROCONTROL will probably use a distributed, paid-worker approach to development, but it could also attract outside volunteers. You would have to be creative about finding a challenge.
Open source projects can be coordinated in many different ways, from open groups where any developer may make changes, to restricted access for code commits. It is quite acceptable for your projects to be managed and organized differently from other open source projects; there is no single "the" open source model.
In some projects everyone is allowed to make changes. In other OSS projects, you need permission to make changes.
In ATM you need a test suite, but most OSS applications today have no test suite. Still, it is perfectly acceptable for the ATM community to work in a different way from projects that have no safety concerns.
There is an underlying assumption behind most OSS that quality is high because there are thousands of eyeballs looking at the code, but this is simply not true in most cases. While Apache may be high-profile, top-quality code, a careful look at SourceForge will turn up thousands of dead projects with poor-quality code and no community around them. Since it is so simple to set up open source projects, many are created without sufficient resources.
Open source project management can be a problem, especially in large projects like Debian, where release dates have slipped by months, then years.
Academic work mostly concentrates on successful open source projects. Very little attention has been given to the failures or the problems encountered. So we did a series of interviews on what problems were encountered in open source projects, and we identified three areas of interest.
In conclusion, you must first identify where the community comes from. In your case, it would mostly be paid workers, but if you want to attract volunteers, you have to find incentives. If you want to use third-party-developed OSS, you will have to verify the quality of the code, just as you must with proprietary code. Finally, there is some academic work available on quality in OSS, so you can find guidelines for high-quality OSS.
from 23'00" to 29'35" (6'35")
J. Feller: We had insights from Debian. What about the BSD approach? >Jo>
O. Robert: Speaking about FreeBSD, we try to be less political. We do try to have some quality assurance. >Ollivier>
M. Michlmayr: Debian and BSD are not better or worse, they are just different, with different beliefs. That matters when you establish a new community: it's very important to make clear what your community is about. You also have to think about the growth of your community, so you can get rid of time-wasters and keep your best developers. Especially in this area, where you need safety, it has to be tightly controlled, yet open enough to allow people to get involved. >Martin>
R. Schreiner: Comparing Linux and BSD, the objectives are a bit different. Linux developers write drivers for the latest hardware, and that might generate instability. BSD, on the other hand, does not try to support new features as quickly, and so has a more stable kernel. It's a different approach. Therefore, the air traffic control industry might lean towards BSD. >Rudolf>
M. Michlmayr: When you have a project, you must choose the most appropriate culture. BSD is probably better for mission-critical projects. Many BSD systems have uptimes of almost 500 days. Nevertheless, many Linux-oriented projects are very strict. >Martin>
J. Feller: The distributed/co-located by paid/volunteer matrix is a very valuable planning tool. If you concentrate on co-located non-volunteers -- another word for that is "employees" -- institutionalising a process of sharing and learning will enable you to then move to the distributed non-volunteers, which is where you get to partner with other organizations.
If you actually had EUROCONTROL, EUROCONTROL's equivalents in other regions, and the major manufacturers in on the game, you'd have no trouble at all attracting the efforts of people who want to learn, contribute, and be part of all that, because suddenly you have this critical mass of industrial and government community that represents an incentive for young professionals. There's a big difference between having the tools available to facilitate collaboration and giving people a reason to do it. >Jo>