Linux Standards, And Why They Shouldn't Matter
The old debate about lack of standardization among different Linux platforms flared up on Slashdot this weekend. It’s a complicated topic, with complex arguments both for and against greater consolidation among Linux programming interfaces and distributions. It’s also something that would be a nonissue if developers could find the courage to declare certain subsets of the Linux community more worthy than others of support, based simply on their size.
As the Slashdot commentators point out, programmers trying to write applications for Linux–especially proprietary ones that can’t easily be adapted by third parties to work in nuanced environments–are hampered by the diversity of Linux APIs and subsystems. If only there were greater standardization among distributions, desktop environments, sound systems and so on, the argument goes, Linux would become much more attractive to developers, and by extension to users.
There’s no doubt that the wide variety of Linux environments makes it difficult for developers to support all Linux users. Producing binaries that will “just work” on any distribution, regardless of its nuances or customizations applied by the user, is nigh-impossible.
Majority rule
But the fact that supporting every Linux user is unrealistic doesn’t mean that writing applications for many or most users is a lost cause. Sure, there are hundreds of distributions and dozens of desktop environments out there, but only a few have any real traction. Rather than trying to support every obscure environment, developers should simply shift their attention to the handful that have enough users to make the effort worthwhile.
One of the chief reasons developers remain reluctant to target only specific Linux platforms, rather than the entire spectrum, is the inevitable backlash that would result when die-hard geeks are told they can’t run application X on their custom-compiled kernels and obscure desktop environments because the application developers have deemed their systems too difficult to support. They’ll whine about their freedom of choice being violated, and complain that their platforms are technically superior to mainstream environments. Thanks to the backlash of such militant geeks, developers will feel alienated and unappreciated, and will conclude that since there’s no pleasing all Linux users, they may as well not bother trying to please any.
Such logic is very unfortunate for the vast majority of Linux users who run mainstream environments and just want their computers to work as well as possible, without being encumbered by ideological debates about freedom or arguments over whose C library or audio subsystem is most stable/developer-friendly/maintainable/has the coolest name.
The bottom line
If vendors like Adobe and Google want to support Linux users, they should release applications for a realistic subset of environments, and be willing to announce explicitly that Linux support will only be available on mainstream platforms. There’s nothing wrong with favoring Ubuntu, Fedora and SUSE over obscure alternatives based on the simple logic that most people use those distributions. It might not be a popular decision among militant geeks who aren’t satisfied until they’ve imposed their agendas of radical equality on everyone and everything, but for the other 99% of Linux users, targeting a limited range of environments in order to simplify development would be a welcome change.
PulseAudio still has the coolest name. d:
Seriously though, I agree. I even agree in a radical way, meaning that Linux distros shouldn’t even be called Linux distros. In my opinion they should be called operating systems. So let’s say Ubuntu is an OS, and if you want to add some specific details, then it’s a GNU/Linux-based OS with GNOME as the default desktop environment. Saying that this or that app supports Linux, and adding a Tux icon on the download page, is misleading. The average user should not care what the name of the kernel is, just the OS. Apple doesn’t advertise its OS as a BSD distribution. It’s Mac OS X. If you want to install an app on it, you won’t search for a BSD app, you search for a Mac OS X one. And program writers don’t say they support BSD or Darwin when they make a Mac OS X app; they say they support OS X.
So yes, I believe if someone’s making a proprietary application for a GNU/Linux-based OS they should just say: “I made an app for Ubuntu and Debian,” or “I made an app for Fedora and Red Hat,” or even “I made an app for Fedora, Red Hat, Ubuntu, Debian, SUSE and Arch.” Of course they could also offer a binary installer, like the one for CodeWeavers’ CrossOver, saying that it should probably work with every GNU/Linux-based OS, but that should be intended for the hardcore John Freetard.
I think it really boils down to installed user base and opportunities for revenue. Less than three percent of the PC world, coupled with splintered factions already predisposed to free software, simply isn’t THAT attractive a market to enter. While yes, three percent of a billion is still quite a few users, why bother when there will be several free alternatives to your app?
Canonical’s on the right track by focusing on delivering an exceptional user experience that continues to evolve. Just as with the Mac, as the user base grows, the developers will come. Joe Freetard will become less of a voice in a distro like Ubuntu, because such users will continue to use byzantine methods to compile Gentoo from source and change all of their config files to create what they like. The key is to balance the wants of hard-core Linux users with the needs of transitioning users.
No, I disagree; this is against the spirit of open software: we want cooperation, openness and freedom. We want people not to be forced to use Ubuntu or whatever. Following standards and offering desktop-independent, distro-independent APIs and standard locations for files is the way, IMHO. In fact, vendors are already working the way you described, but I think this is the wrong approach.
Consolidation would be great, and would help Linux actually become relevant, but it will never happen. Too many egos and idiots screaming “LINUX IS ALL ABOUT CHOICE”, as if users *want* a choice. (Hint: They don’t. They want one solution that works well.)
Good article. I also agree with “Yeah” above: users do not want choice and neither do developers. No one wants to waste time sorting through a tangled web of half-assed “choices.” The choice argument is stupid when applied to distros, and it’s even more stupid when applied to core APIs.
Fragmentation is not choice. It’s a cluster-frak of egoism.
Freetards? Do I really have to say this? Anyone who disagrees with you or agrees with the principles of software freedom is an idiot? You just opened the door to being called one yourself.
It still amazes me to hear people claim consumers don’t want choice, or are afraid of choice, or are confused by too many choices, when in any other market, choice is what drives competition among vendors and excitement in consumers. I’d be interested to know which brand and model of automobile, television set, computer, cell phone, or beer some of you think the rest of us should settle on.
But this argument is not even being driven by the invalid assumption that consumers don’t want choice, but by the perception that vendors think it is too hard to target multiple platforms and can’t figure out how to solve the problem. I don’t believe that’s how they think. If they saw an opportunity to make a lot of money, they would find a way to do it, and there would be no need to hold their hands.
Frankly, I think many of them are afraid to compete in an environment that already has as many free applications as Linux does. There have been platforms with fewer users that had no problem gaining third-party applications fairly quickly. When enough potential exists for making money, even fearful proprietary software vendors will overcome their reluctance to port applications to as many distributions as they believe will make them money. And by then, there will be plenty of idiots lining up to part with theirs.
I hope this goes through, as my previous post didn’t (maybe my fault). This post, once again, is against the spirit of free-software: we do want choice, and we do want freedom. That’s what free software is all about.
WWU has become a platform for people to promote a change in Ubuntu whereby it no longer follows the principles of free software. If you have to renounce your principles to achieve your goals, what are you left with?
As Scott said, we can have different ideas without being called a retard, an idiot, or stupid, as has already happened in the comments to this post.
I originally subscribed to the RSS feed of WWU to follow the news on vendors embracing Ubuntu, and that has been good. But the commentaries, I personally find both inflammatory and contradictory to the spirit of free software, which is the fundamental foundation of Ubuntu.
Maybe this site could use one section (with its own feed) for news and one for editorial content; at least that would be useful for me.
Standards to which people conform and APIs meant for programmer consumption might be barking up the wrong tree. The only viable solution to any of these ‘problems’ is one that doesn’t force one side to conform to the demands of the other.
Consider the problem of drawing a line on a plane. You (logically) need a starting point and an ending point. Should you use a point struct? An object? Who’s implementation of the Point Object? Should you pass the function 2 2-tuples (x1,y1), (x2,y2), or should you pass 4 ints?
The reality is that it doesn’t really matter. It’s the job of our software to determine how to do what we ask it. That’s reflected in the move from machine-level languages to higher-level semi-human-readable languages. One of the unfortunate consequences of this chain is we’re stuck in a rigid framework of syntax.
Our systems should (and can) be flexible enough to infer most of what we want. It may require some semantic markup to guide the process, but there’s no reason that APIs and dependencies can’t be resolved and swapped on-demand. Such an approach would provide an incredibly light-weight platform which would enable true code portability… and a more more sensible development process.
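The point-passing ambiguity described above can be sketched in code. This is a hypothetical illustration, not any particular library’s API; all names (`Point`, `draw_line`, and the three stub variants) are invented:

```python
# Hypothetical sketch of the "draw a line" API ambiguity: three equivalent
# calling conventions a library might expose, plus an adapter that accepts
# any of them. All names here are invented for illustration.
from typing import NamedTuple


class Point(NamedTuple):
    x: float
    y: float


# Three equivalent ways a library might expose the same operation:
def draw_line_points(a: Point, b: Point) -> None: ...
def draw_line_tuples(a: tuple, b: tuple) -> None: ...
def draw_line_scalars(x1: float, y1: float, x2: float, y2: float) -> None: ...


def draw_line(*args):
    """Normalize any of the conventions above to ((x1, y1), (x2, y2))."""
    if len(args) == 4:                      # four bare coordinates
        x1, y1, x2, y2 = args
        return ((x1, y1), (x2, y2))
    if len(args) == 2:                      # two Points or two 2-tuples
        (x1, y1), (x2, y2) = args
        return ((x1, y1), (x2, y2))
    raise TypeError("expected 2 points or 4 coordinates")
```

The adapter is a crude stand-in for the kind of on-demand resolution the comment imagines: the caller’s representation stops mattering because the boundary negotiates it.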
Don’t be so anti-semantic.
*whose
*much more sensible
The next-best-thing to being able to edit 🙂
Josh, beautiful point, but I think in that approach you lose type safety and things can go very wrong. There is also the efficiency. It’s not the same to pass a heavy structure by reference that to pass it using markup 😀
* that = than 😉
Leo: I don’t think that encouraging vendors to support certain Linux platforms, but not others, based on the size of their user bases contradicts the FOSS philosophy. It just makes sense and is the only realistic way to get mainstream commercial applications released for Linux (or at least some Linuxes). It wouldn’t mean that FOSS users couldn’t choose more obscure platforms if they wished; they’d only have to stick with the mainstream distributions if they wanted to use certain commercial applications (which they probably wouldn’t touch anyway because they’d likely not be open-source).
As for the post being inflammatory: I read it over again and removed some inflammatory language–which sounded alright late last night, but after a second reading was a bit immature. My apologies for language that was disrespectful. And I will pass on the idea of separating the site into different feeds based on the type of content.
Scott said: “It still amazes me to hear people claim consumers don’t want choice, or are afraid of choice, or are confused by too many choices, when in any other market, choice is what drives competition among vendors and excitement in consumers. I’d be interested to know which brand and model of automobile, television set, computer, cell phone, or beer some of you think the rest of us should settle on.”
Sure, consumers do want choices when it comes to products. The choice in computing is Windows vs. OS X vs. Linux. Or, possibly, Picasa vs. Adobe Photoshop Elements vs. iPhoto. That’s choice, NOT PulseAudio vs/plus ALSA vs/plus OSS vs/plus GStreamer, etc. Do you see the difference?
Standardization does not reduce choice, it merely creates a coherent platform for which to deliver choice.
By the way, there is a significant amount of market research that shows that consumers are in fact made unhappy by a large number of seemingly irrelevant choices. We have all experienced this to some degree. Walk into any grocery store and try to find a tube of toothpaste. At my local store, I can choose from about 40 different brands. I usually choose whichever one I’m used to. I am not interested in trying all the different brands of toothpaste because I’d prefer not to spend so much of my life trying out toothpastes.
I agree with the other Josh (13) as far as choice goes. For the most part, I’m after freedom *from* choice (as opposed to freedom *of* choice). The reason for that is very simple:
Widget X does everything I want, but it doesn’t have vital feature A.
Widget Y misses some of the good features of Widget X, but it has vital feature A.
I couldn’t care less about having the freedom to choose flawed Widget X or flawed Widget Y. What I really want is Widget Omega — it has the good features of Widget X and the good features of Widget Y combined into one solution. For those who really liked Widget X or Widget Y, Widget Omega can be configured to act exactly like either Widget X or Y.
I think the driver of competition is more subtle than ‘choice’ — competition is the result of flawed production and producers being necessarily out of sync with consumers. The marketing department wants to claim features A, B, and C, but some people want only A, some people want only B, and … some people want another permutation of the features. It’s a problem of trying to beat the competition versus trying to do What’s Right — produce high-quality, well-documented, well-tested components that can be personalized and composed according to a unique individual’s unique preferences … and in a way that doesn’t require Arcane Knowledge (i.e., that is intuitive).
@(Leo, 10):
I glossed over things with the ‘semantic’ buzzword, but because of the nature of this sort of system, you would actually have more type safety. Semantic annotation would allow the precise usage and meaning to be known, so that your methods wouldn’t be validated by type-signature (oh, I’ve got 2 ints, and this thing takes 2 ints), but rather by *meaning* (oh, this function takes 2 ints, but one is supposed to represent a temperature in Kelvin and the other is supposed to be the distance from the center of a Binary Pulsar, so it looks like these two ints that I’m trying to pass actually don’t *fit* into this calculation, as they were the current GDP of Zimbabwe and the number of licks it takes to get to the center of a Tootsie Pop).
As for the efficiency issue … it’s not as large a factor as one would initially fear. A lot of the checking/safety would be done at compile/design time (as would the majority of type checking), and the objects being passed around could still be passed by reference — a reference to an annotated object, perhaps?
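A rough sketch of this “checking by meaning” idea can be made with wrapper types. The class and function names below (`Kelvin`, `Parsecs`, `pulsar_model`, `checked_call`) are invented to mirror the Kelvin/pulsar example in the comment above; this is one possible approximation under Python’s ordinary annotation machinery, not a real semantic-annotation system:

```python
# Sketch of meaning-based argument checking: two floats that are both
# "just numbers" to the machine are given distinct semantic wrappers,
# and a checker rejects calls whose meanings don't match the signature.
# All names are hypothetical, invented for this illustration.
from typing import NamedTuple


class Kelvin(NamedTuple):
    value: float


class Parsecs(NamedTuple):
    value: float


def pulsar_model(temperature: Kelvin, distance: Parsecs) -> float:
    """A toy calculation that only makes sense for tagged quantities."""
    return temperature.value / max(distance.value, 1e-9)


def checked_call(fn, *args):
    """Reject arguments whose annotated 'meaning' doesn't match fn's signature."""
    expected = [t for name, t in fn.__annotations__.items() if name != "return"]
    for arg, want in zip(args, expected):
        if not isinstance(arg, want):
            raise TypeError(f"expected {want.__name__}, got {type(arg).__name__}")
    return fn(*args)
```

With this, `checked_call(pulsar_model, Kelvin(300.0), Parsecs(3.0))` succeeds, while passing two bare ints (the GDP of Zimbabwe and a count of Tootsie Pop licks) is rejected even though the raw type signature — two numbers — would have matched. Since the wrappers are tuple subclasses, arguments can still travel by reference with little overhead.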
This post is irresponsible. Linux isn’t a proprietary operating system; it’s an ecosystem. And while differences could distract developers, 99.9% of all Linux distributions are the same. I’ve read posts here in the past and thought there were some good views, but this one is casually destructive. Commercial applications do get written for Linux (a good number of games, OSS…). So maybe you should be referring to building a common API instead of going maverick.
Why do people always seem to forget about .tgz and then leaving it to the different distros to compile it for themselves?
Personally, I’ve never had a problem compiling a program from a .tgz, except missing dependencies. It doesn’t take long, and with the proper tools, it’s not overly difficult to figure out that you need to install missing dependencies (esp. if they are in the repos).
The only issue here is for companies that don’t want to release source: they would need some way to automate compiling the code for us, or they could use something like a .run package to download and install it automatically (I’m thinking of the Nvidia driver manual installation process here).
I agree with Jordan.
1. Write a good app that the distributions pick up and make sure work in their release.
2. ???
3. What’s the problem…
I personally would like to see some standardization to make it easier to develop and install third-party programs. It would encourage vendors to support Linux and, more importantly, entice more users away from Microsoft. Could this make it easier to manage distros as well? I have no idea how this would work, as the open-source world finds it hard to agree on things. But is it possible? Does it change the concept of what a distro is? Can there be some sort of translator program that allows a one-step, trouble-free install?
Supporting the main distros? I would like to see this as a short-term solution, a second-best option. As a comment above says, it is the most realistic option at this point.
Zac: you nailed it; this is a short-term solution. But what I don’t understand is why this is proposed like something new. This IS happening. Oracle partners with Red Hat, IBM partners with SUSE and Red Hat, etc.
In the long run, standardization will help with what everyone else is asking for: consolidation. But not de facto consolidation; evolutionary consolidation: since I could run Photoshop on ANY distro that follows the standard, I’d choose the BEST one (not the one that partnered with Adobe). This will drive healthy competition and lead 95% of users to use 3 or 4 distros, like it has always been anyway 🙂
Totally removing the text referred to by comments removes all context for them. I guess I should have quoted your use of “freetard”. Now, no one reading the comment will have any idea what I was talking about. That’s bad form on your part. “Fool me once…” as they say. Rest assured, it won’t happen again.
Perhaps it’s up to distribution developers like Canonical to make their particular flavour of Linux attractive enough to developers so that developers and consumers (of all kinds) will decide what the standard is.
Too many people are still thinking the kernel is the be all and end all of the OS. Just because we’re all using a version of the same kernel doesn’t mean we’re all using the same OS. There’s a lot more to an OS than the kernel.
Take for example FreeBSD or openSolaris or Minix or Mac OS X. Should their applications all run on Linux without alteration? After all, all of these OSs are based on UNIX. They all work to the same or similar standards and protocols. They are all different implementations of broadly similar concepts.
A standardised way of distributing proprietary applications has already been developed and is in deployment: the LSB (Linux Standard Base). It is a common set of APIs and ABIs that any proprietary developer can expect on an LSB-compliant distribution. No need to say “we support distro X.” Simply list LSB v3 as a requirement, just like people list Windows Vista or Mac OS X as requirements. There is no difference.
I totally agree with the writer of the post.
It is the only way to make some distro a mainstream OS.
Whatever happened to statically linked apps? Remember the good ol’ days. Windows and OS X apps worked because vendors often shipped dependencies along with their apps. The side effect was that poor installation systems buggered your Windows install with incorrect (usually deprecated) versions of libraries. Windows Qt-based apps ship with their dependencies, and apps that need readers come with those readers. In recent times users have been required to go through some exotic process of getting the dependencies that make their apps work. On Linux it becomes one heck of a cyclical process where dependencies have dependencies that have dependencies, which makes it difficult to support everyone. How do apps like Opera do it, while maintaining the all-desirable attribute of a compact download size? The Standard Base tries to solve the problem of change by simply avoiding the subject altogether, kinda like reining in street slang by creating a snapshot that defines the boundaries of the language at a point in time. Think of how many changes have come over the past few years with the X and sound servers, D-Bus, file systems, hardware support and so on.
I believe the eventual outcome is that companies like Google will adopt fledgling projects, then nurture and mature them into the next generation of software that fills the gaps that exist today. For example, I can’t tell you that I ever heard of Clutter before Intel announced Moblin. The redeeming quality of open source is that choice and freedom exist for companies as they do for individuals.
Earlier this morning I found this post on a similar topic: http://www.raiden.net/articles/oh_no__linux_standardization_is_the_end_of_the_world__not/
@Bogdan: excellent article.
The gist of the article is that hardware standardization happened YEARS ago. Anyone remember the PC clone wars that yielded the buzzword “IBM-compatible,” then rendered it obsolete? Apple did not fold, but proceeded to crush Franklin’s “Apple II-compatible” Ace offerings, back when Steve Jobs went on to NeXT. However, the PowerPC CPU finally gave way to Intel, and now Macs don’t have as much proprietary hardware as they used to. Alternate-OS virtualization was discussed on Apple computers at least 20-25 years ago, and now… Boot Camp. How many forget the former, but instantly recognize the latter?
So why is software standardization so dreaded when it could very well be inevitable as well?
@Chris Tozzi:
“Militant *geeks?*” Come on, use the original old-school word, hideous though it may be: NERDS. I mean, let’s be realistic: geek usually implies somewhat of a sense of social aptitude. Nerd… I shouldn’t have to complete that sentence. Stand back and look at the user world at large, and you can almost always find a subset of users who tweak everything to the way they want it. Linux tweakers just happen to be much more fluent in code much of the time.
I really support what you’re saying, although I will admit some articles summarized it in a much more elegant and diplomatic way than here. I agree that “choice” and “it just works” should not be contradictory statements.
@Josh (13): “Standardization does not reduce choice, it merely creates a coherent platform for which to deliver choice.”
Aye. Much easier to refer to machines on the whole (clocks, cars, etc.) rather than to explain them by the individual nuts and bolts.
For example: how many people honestly build their cars from scratch? Those that customize usually do it after-market. This shouldn’t be a foreign concept to computing.
[…] market share and a huge user community”, you’re calling people “freetards” (http://www.workswithu.com/2009/06/01/linux-standards-and-why-they-shouldnt-matter/) and “free-software militants” […]