
Oblomovka


Archive for July, 2004

2004-07-30

notes on: splitting books open: thoughts on the digital future of technical documentation

Okay, now I’ve joined Andy Oram, talking about Splitting Books Open: Thoughts on the Digital Future of Technical Documentation.

He’s talking about the difficulties of “community documentation”. Describes his struggles to find out how Trackback works. How hard it was — there’s an ecology of self-education on the Net, but it’s hard to drill down to the right place. So traditional documentation has some place in the modern technical world.

What are the advantages of a good book? Can this be replicated in community education?

Firstly: Pace. Revealing the answers in the right order. There’s an idea of audience – technical reviews, focus groups and so forth. Background: books can gather together relevant and enlivened background. Structure: like pace, but on a larger scale. “Life in general is getting less structured. Even the military is much more flexible than it used to be.” – so we can tolerate less structured documentation.

Questions answered by good books: What’s the range of problems that this tech solves? How do different parts interact and alter each other’s behaviour? What are the strengths and weaknesses of different solutions? What am I responsible for once I adopt the technology?

A step toward the solution: Safari. (Just sketching this out, not talking about Safari) – subscription service, profitable, there’s a competitor called SourceBeat which looks interesting.

Stuff that’s wrong with Safari (fun seeing this from an O’Reilly guy). Material is essentially identical to print books. There’s a nice Eclipse plugin, ORA should open source it. Lots of work to do, lots of potential. Other publishers should join Safari, or start their own service to push us to do more with it.

Comments: people saying the thing they really want is Safari offline. Everybody agrees it’s tough because of warez. Andy Oram: “Yeah, but people rip us off all the time! Will it make it worse?”.

Potential future roles for publishers: Authors may contract out to publishers for particular tasks: editing, layout, art, indexing, tech review. Publishers might help with publicity, getting book reviews.

Improving user education: I am trying to apply what I know about good documentation to USENET, mailing lists, Linux Doc project, etc.

1. Urge active community participants to become formal contributors. O’Reilly often finds people on mailing lists and plucks them out. Project folk should do this more – find out what motivates them, offer rewards. The Wiki concept is intriguing but too new to judge effectiveness. (Comments from audience: Wikipedia is a good indication they will do well. Wikis tend to sprawl to cover everything. Yeah, but some pages become hotspots. Plone guy: We’re moving away from wikis for documentation, because the amount of refactoring increases exponentially, so we’re moving to a more structured HOWTO system, with named authors, ownership etc. Andy Oram: a Wiki is a good place to collect information, then someone goes through and pulls out the info. Audience seems to agree.)

2. Incorporate professionals into community documentation: problem is these professionals cost. Whittle down what is needed to the point where authors can afford professional help — just tiny discrete tasks. Or what about sponsorship? Companies must love decent documentation, right? Audience: companies don’t even give money to their own documentation. Perl’s good documentation was written while the authors were writing O’Reilly books, so there’s a weird kind of sponsorship here.

3. Nurture new users; don’t repel them. Community has to take responsibility for each member’s learning. Dump “RTFM” from your ammunition bag. These may be people you need to hire in six months.

4. Point people to documents: many useful explanations are buried. Create flexible pathways through documentation. Make use of professionally developed documentation. Large volumes of documents will be overwhelming.

Plone guy: we have an embarrassment of documentation. So we’re structuring, and it might be useful for other projects.

Enhancing rating systems. Let readers rank documents. There are problems, but then there are problems with reputation in the real world. Willingness to tolerate bad advice varies with the subject matter. Documentation in Plone “ages” (the ratings go toward the mean over time).

Ancillary failings of user education: it favours English-speakers. There are translations, but it’s hard to keep things in sync. Cultural differences aren’t respected: we expect people to ask questions, or stand being flamed. You shouldn’t have to give up your cultural background to learn information. Different learning styles not respected. There are gaps, and haphazard coverage.

notes on: open source renaissance – what novell is doing to make open source a mainstream reality on the desktop and server

Welcome to another day on a bus tour of OSCON, seated at the back uncomfortably close to the voices in my head. I’m currently in the Rumsey keynote, and I’m not writing notes about it, because you really have to sit and watch these maps and see what David Rumsey has done with it. Kidnap someone with a fast broadband connection, go to the David Rumsey Collection and click around like a mad thing. Note to self: we have to drag this speaker to the next NotCon. It’s the most inspiring thing I’ve seen in some time.

Okay, now it’s David Patrick, who is a VP at Novell.

Not going to talk much about Novell (boo!), but what’s going on in Open Source from the POV of business. (I’m pretty sure this is Patrick’s standard speech to businesses, which is usually a very bad sign, but it’s actually quite a good insight into what businesses are worried about with open source). David Patrick was CEO of Ximian, working with Miguel. They put it together commercially five years ago. They went out to CEOs and IT people at Fortune 500 companies and pestered them with questions. Five years ago, very few people knew about open source. Now they do.

If you’re over 35, you probably know something about Novell. They’ve been around a long time. Novell bought Ximian, S.U.S.E.; we went from a proprietary company to an open source company. We’ve also started marketing. We’re not known for marketing. (“Freedom to choose Linux or any other software you damn well please” is the ad he has up on the screen.)

Novell was a Windows shop. On March 31st of this year, they cancelled their Windows licenses. They’re no longer an active licensee. They say they’re their own case study. The steps are: moving to OpenOffice, then moving to a Linux desktop. Two thousand out of six thousand have already moved. When Novell Desktop ships, the rest will be moved. They’ll be documenting all of this on the Web.

The concern from companies is that there’s no money to be made out of open source. What you see is the evolution of new business models: MySQL good example. Licensing, maintenance, support, training is the usual business model. In open source, less emphasis on licensing, more emphasis on the other three. It’s a good fit with how the industry is changing. We’re becoming a more mature industry — less about finding new customers, more about looking after your existing clients.

We need to educate everyone – customers, businesses, and each other about the nature of free licenses. That’s very critical, he says.

What are the myths that we deal with when we talk to enterprise customers about open source?

Myth 1: Open source will destroy the software industry. Open source doesn’t spell the end of proprietary software.

Myth 2: “Open source is a haven for purple-mohawked hackers who write sub-standard code” – quote from a CTO. Really, sixty percent are software professionals with >6 years of experience. And, he says, I don’t care what colour hair they have. The code quality is really high.

Myth 3: Open source is a fad. Yeah, right.

Myth 4: Open source is “The Right Way” to develop software. It’s a great way, but it’s not the silver bullet. “Buddha wasn’t meditating on open source software”

Myth 5: There’s no money in open source. We’ve invested $260 million in Open Source. (Rather notably, didn’t actually explain how to make money. Just talked about how much has been invested in it. But you know…)

Myth 6: Open Source is free. Sure, downloading is free, but it’s not free to the enterprise.

How do you succeed? You need very precise focus. You need to understand complements and substitutes. Does open source complement what I’m doing? Substitute what I’m doing? Is there something out there that’s already doing what I need?

Now he’s talking about licenses. You can tell this is really his talk that he gives to businesses, but that’s not as bad as it might be. He’s saying how funny it was that the Ximian coders knew more about the subtleties of licensing than their corporate attorneys, and that knowledge needs to be spread.

Something that’s really helped is more universal computer literacy. Nice anecdote about his fifteen-year old daughter and how she hassled him every day over the problems she had with her Windows machine, so in the end he moved her to SUSE, and now she looks at him like he’s an idiot when he asks if it’s working okay.

The Internet was vital to open source. Most of the early Ximian engineers were hired off of IRC.

What else drove open source? Needs not being met by proprietary software. Especially the “one size fits all” idea – you have to change your company to fit our software. CEOs say they’re tired of being held hostage by the vendor.

Ton of money being invested in Linux – billions by IBM, Novell, RedHat.

Linux is a substitute to Windows. We have to make decisions constantly about whether we’re going to implement something like JBoss, or just use our engineers to improve JBoss.

We’re a cooler company since we adopted open source; we’re not ashamed any more. My kids aren’t embarrassed to wear the t-shirts.

We think the desktop is very close to on par with Windows. We believe the next generation will be on par, and we can beat Windows at a $50 price point.

How do companies move over? Look at complements or substitutes — do you add open source stuff, or replace proprietary stuff. What relationship do they want with vendors? Do you need 24/7 support? Or are you pretty self-contained? Tying oneself to open source solutions isn’t something he believes will happen. It’s going to be a long time before a customer can switch entirely to open source. Seeing it creep up the stack – low level, up to desktop, CRM. He doesn’t see anyone using a free software tax product evarrr.

2004-07-29

notes on: protecting your open discussion forum

Jamie McCarthy is talking about how Slashdot defends itself from various attacks (from DoS to “just jerks”). I managed to resist the temptation to sit at the back shouting “FIRST POST!” until they threw me out.

So far there’s been a great slice of the life of trolls, extracts from their scripts, IRC chats, and the effects on sites like the Wil Wheaton site. Jamie says he’ll be putting his slides up on his blog later.

“The more an attacker has to lose, the less likely they are to attack your site.” What if they’ve given you money? Or they might lose their job? Or lose access to your site? Seeing is gaming means: if they know about the rule, they’ll try and beat it. So if you ban something, and they find out a way of bypassing the ban, they’ll find another way. On the other hand, if you remove the visibility of the result, then they don’t know they’ve won. If they can score it, they will definitely game it. So trollers try to get slashdot stories into the “Hall of Fame” site. Or trying to get everybody to make them their slashdot friends. Same thing with Orkut: they have a top ten, so that inspires people to increase worthless links. Classic example here is Slashdot karma. We made two mistakes with karma. One: we called it karma. Second: we made it a visible number that was unbounded. When we made it into an adjective, karma-whoring collapsed. People want consistency — if you change the rules, innocent bystanders get mad. People don’t mind draconian rules if they’re consistent.

Trad exploits are out of scope for this discussion. Forum attacks are flooding, spamming. Doesn’t have to be comments. Any link on your site is the same: trackbacks are the same.

First defence is to make it expensive. Code can’t distinguish between an ingenious flood attack and a good discussion. Always close out old discussions on topical sites. Make the attacker spend time (geometric increases)

Or spend IPs. Open proxies are the main force multiplier for attackers here: we need to stop open HTTP proxies. HTTP is the new SMTP. Reputable anonymisers aren’t a problem; it’s the 10,000+ dumb open proxies. Test ports 3128, 8080, 1080 and 80 (the usual proxy ports) of people who comment. Slashdot has an LWP::UserAgent patched to cope with multiple proxy tests. People complain about port scans, but they understand. Since April, when we implemented this, the crap floods have diminished. Ask me again in a year to see whether this has worked.
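(Ed: for the curious, here’s roughly what that kind of check looks like – a hypothetical Python sketch, not Slashdot’s actual code, which as Jamie says lives in a patched LWP::UserAgent. The port numbers are the ones from the talk; everything else is invented, and a real test would try fetching a known URL *through* the candidate proxy rather than just looking for listening ports.)

```python
import socket

# Ports named in the talk as the usual open-proxy suspects.
PROXY_PORTS = [3128, 8080, 1080, 80]

def open_proxy_ports(ip, ports=PROXY_PORTS, timeout=3.0):
    """Rough first pass: which of the usual proxy ports answer on this IP?

    A listening port alone proves nothing; a proper check would fetch a
    known URL through the candidate proxy and see whether it succeeds.
    """
    answering = []
    for port in ports:
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                answering.append(port)
        except OSError:
            pass
    return answering

if __name__ == "__main__":
    # 192.0.2.x is a reserved documentation range; swap in a commenter's IP.
    print(open_proxy_ports("192.0.2.10"))
```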

Or spend accounts. You do have to ensure that the email is valid by sending them a confirmation link. Watch out for robo-created accounts. They will usually just come from a single domain. Three hundred accounts from hotmail = normal; three hundred from evilbadmail.com = suspicious. You can make new users’ voices quieter – moderated by a human for their first post or two. If the account has to participate in a human way, that helps. If you want a low barrier to entry for your site, you need to preserve newbie posting rights. And if so, you can only track by IP.

As an example, someone at OSCON (trolls! trolls amongst us!) created an account, and posted nasty comments so that the IP was banned. But it didn’t work, because we took into account that users of good standing can still post (anonymous posting *was* banned, which explains why all my hot grits links got dropped).

Captchas don’t work. These simply require that a human be briefly present. They don’t reduce your rates enough — only helpful if you’re trying to move actions from millions to thousands.

Other solutions: host troll discussions. Make it an easter egg, so they think they’re gaming you. They’ll preserve the site, and you can see what they’re doing. Visit where they chat. Offer a bounty (Wil Wheaton offered a $1000 reward for information leading to a prosecution). Or file a lawsuit. FreeRepublic filed for an injunction. Instead of writing code to block a troll, they put out a legal injunction.

Jamie’s summary for protecting your open discussion forum:

notes on: book sales tell us about the state of the tech industry

Tim O’Reilly is exposing his market research – this is sort of the sequel to a great talk Tim gave at Foo Camp where his team crunched through O’Reilly book sales to try and work out new trends.

As ever, I missed the beginning. We join Mr O’Reilly as he is asserting very carefully that they have no idea how these stats correlate with actual sales, then pointing out that MySQL just overtook Oracle. The stats themselves are a careful picking-over of U.S. book sales from BookScan (not just O’Reilly). What follows is hand-wavey notes with me scrabbling to keep up, but Tim says the actual graphs will be made available publicly soon.

.NET books just passed Java in the last quarter of 2003.

Technology Supplier Market Share: Microsoft at 35%, all of open source at around 20%.

Programming language market share: 70% open source (including Java), 30% is proprietary (Visual BASIC, etc).

Version trend information – Photoshop, Dreamweaver, Flash: sort of bell-shaped; new versions spike and then slowly decline, so the total is zig-zaggy but basically constant. With Red Hat Linux, though, after Fedora, RH book sales collapsed. Might RH have made a bad decision? The total market went down pretty quickly.

Macintosh has 3% market share? Not in books – Mac books sell 25% of all computer books. (Linux 17%, Windows 55%).

Tim’s now showing a bunch of tree maps which aren’t really easy to describe. But that’s okay, because you can play with them yourself.

Question from audience: is there any geographical patterning? This is just U.S. sales, although they have U.K. data. Wanted to sort by geography, but didn’t have time.

Question from audience: will you be publishing your categories, the ontology you’ve deduced? Answer: yeah, we really want to, in our copious free time.

Are there any hotspots? Answer: Bay Area, Los Angeles, New York. Best tech shop is in Virginia. Weird, except they’re right next to Langley.

notes on: state of the unix kernel

Greg Kroah-Hartman sez: we don’t have a development kernel anymore. He’s been a kernel developer for a long time, and is a kernel maintainer: PCI, USB, drivercore, sysfs… “lotta crap like that”.

Pre-Bitkeeper, this is how they did it: maintainer sends patches to Linus. Wait. Resend if patches dropped. Lather, rinse, repeat.

January 2002 – patch penguin lkml flame fest. Worried about patches getting dropped. Linus decided to use BitKeeper; 2.5.3 was the first kernel. “License weenies got their panties in a bind, ah feh.” (Greg is like the Vin Diesel of kernel dev: I keep expecting him to shout “I! LOVE! THIS! DEVELOPMENT CYCLE!!!”.)

BitKeeper changed things a lot. BitKeeper maintainers got a lot more work. Linus sucked in all patches, discarded the crappy stuff. Non-BK maintainers just sent patches as before.

Unexpected consequences: we knew what Linus was doing, so much better feedback. Bk2cvs and bk2svn trees. bkcommits-head mailing list so you could see what was flying around.

All patches started to go through Andrew Morton; BK stuff goes through Linus.

Result: 2.6.0 = Two years of dev, 27149 different patches, 1.66 patches per hour. At least 916 unique contributors. Top developers handled 6956 patches. Ten patches a day for two years. People should research this data.

2.6.1 = 538 patches, 2.6.2 = 1472, 2.6.3 = 863, 2.6.4 = 1385… it keeps going. By 2.6.7, 2306 patches. Something is wrong. This is the stable kernel, but we’re going faster – one million lines added, 700 thousand lines deleted: a third of the kernel. A *lot* of data, and this is *stable*. Feeling uncomfortable yet?

All patches end up in the -mm tree. All bk trees end up in the -mm tree. 17 different trees, acpi, agpgart, alsa, … So we have a staging area.

And 2.6.7 is the most stable kernel we’ve ever had. So we’re doing something weird, but right.

The -mm tree *is* the development kernel. Patches can go in, and if they suck, they can get out really quickly. Linus gets the good stuff, the -mm tree is the tester’s tree.

Almost everything is tested in the -mm tree before it goes to Linus.

Near future: Linus releases 2.6.x, maintainers flood him with patches, all have proved their stability in the -mm tree. We all recover, fix bugs. One month later, a new 2.6 is released.

Will 2.7 ever happen then? Maybe, if we have big intrusive patches — page clustering, timer core rewrite, stuff that touches everything. Linus’ll fork 2.7, all 2.6 changes will be merged into 2.7 daily. If 2.7 is unworkable, we delete it (so save your 2.7 patches). If 2.7 works, we merge to 2.6, call it 2.8.

Summary: the kernel developers all like how this is working. No stable internal-kernel API, never going to happen, get used to it (syscalls won’t break). Drivers outside of the tree are on their own (quote: “you’re screwed”) (conclusion: get into the kernel, quick before it runs crazily away in a big honking yellow bus of ever-accelerating development.) Proprietary people: if you’re not in the kernel, Greg says he doesn’t know you exist. And if you don’t give back to the community, he doesn’t care, either. (“Just my opinion”)

Everything subject to change at any time. These days, Linus is running the stable tree, -mm is running the unstable tree. When they realise this, they might change their mind about how things are doing.

Q&A: ISVs like Oracle are going to go crazy, because they need to catch up with Greg and the kernel developers. They can’t just certify an ancient kernel and stick with that anymore.

Q&A: We don’t want the distros to fork, as happened when SUSE and Redhat settled on differently patched 2.4.

Q&A: What about automated testing? Big companies have a test suite – hopefully, they’ll start running them every night. Novell has a giant test department, and they’re ramping up to do nightly test runs. If they can get that under 24 hours, we have a big honking regression test.

Q&A: (from me) What are the limits on the speed of kernel development now? “We’re going so fast, there’s a huge rate of change. Andrew seems to scale pretty well; I’m maxed out; more people coming in. We’ll find out what the limits are.”

Q&A: How many people working on Kernel at IBM? Answer: don’t know, Greg emphasises that he’s only speaking for himself.

Q&A: When are you going to fix broken stuff, like Firewire? “Well, it should build.” (heckle: is that your testsuite then?) “Yes. We can’t test drivers. You need to test them yourselves.”

Q&A: The problem with automated tests for drivers is that it needs hardware. Who is going to pay for that? “I’ve written drivers that I don’t have hardware for; I know someone who has debugged joystick drivers via IRC between Czechoslovakia and Israel. ‘Push the joystick left? Does it work?'” Some stuff isn’t easily testable. We need to fix the infrastructure. OSDL trying to fix this.

Q&A: How can Andrew and Linus do anything but rubber-stamp thousands of patches? Answer: we’re not writing much code these days. It’s a trust network — I trust people who send us stuff. And now we have a blame network, too, so if somebody sends me something that breaks, I know whose fault it is.

Q&A: Is there still a problem with tracing origin of code? With BK, I can resolve all lines to an email address and a name. The lawyers I speak to say that’s okay. All patches get a “signed off by: name and email”. That’s a little more explicit. Not legally binding. Once again, it’s a blame issue. I’ve seen patches flow through five people. More people to blame!

Somebody was sending me code on plug and play, and it just seemed too good: conformed with all the Microsoft specs, worked really well, made me very suspicious. So I made him prove it, send me all the places where he found the stuff. And I was finally happy with the provenance. Later it turns out he was seventeen years old. You can’t tell where you’re going to get excellent code.

Oh, btw: Greg says there’s a really good piece about this switch in the latest issue of Linux Weekly News. I don’t have a subscription to LWN, for ridiculous personal reasons (I feel odd using paid-for info to compile NTK. Same reason I don’t like taking journalist freebies: I believe I will be struck by lightning if I do. Previous experience indicates that belief to be justified.) But if I wasn’t so weirdly idiosyncratic, I’d pay their sub rates in a shot.

notes on: make magazine

Make: Dale Dougherty, publisher of Make Magazine. Mark Frauenfelder is the editor. Will be coming out in Spring. Here to have a discussion; they don’t have a presentation, they just want to work out what they want to do with it. They’re here to increase their pool of contributors, rather than subscribers.

Three streams: Dale developed the Hacks series for O’Reilly. Really smart people are doing interesting, clever, nonobvious things with computers that other people would find useful. Another book project inspiration was “Hardware Hacking Projects for Geeks” – like replacing the sound chip in a Furby.

Started because Dale was in a cab with Tim O’Reilly saying “there isn’t a Martha Stewart in the technology space – somebody who rediscovered and recovered crafts and gave them to a wider public”.

A move from mass-manufacturing to individual manufacturing. Creating one-offs at home, using tech Dale’s seen at MIT Media lab. Current magazines are “cargo magazines” just telling you where to buy, not how to manufacture.

Mark Frauenfelder up now. He said the idea reminded him of the old Forties Popular Science magazines; back then it was cheaper to build than buy. Then buying became cheaper, so we lost that knowledge. But it’s always been a lot of fun, and the roles may reverse again, at least for customised stuff.

So example spread they’re showing is of Kite Camera photography. Feature is a low-cost, $10 camera with a silly putty timer. Ebay as a parts supplier for everyone. (Looks good, although in the cutthroat British magazine industry, I can already hear Future and Dennis leaping like jackals on these scraps and sending their first Make rip-off to print before this talk even finishes. “Make Format”? “Slaphappy Magazine” “Dig”. Yeah, “Digger”. With a soft-porn centerfold section called “The Dirty Digger”)

Going for a HOWTO feel, just to improve the overall documentation of these products. It’s valuable to preserve the dead-ends too, so people can take off with those, so they’ll be sticking

Somebody from audience suggesting talking to Fry’s about “MAKE kits”. Popular Science used to offer “buy the plans” services (like Altair, or Nascom). Mark wants to include the complete plans in the magazine itself.

Question: are you going to have a section for quick hacks? Yes: mobile, home entertainment, etc. Guy asking says he’s got a lot of hacks that are too small even for Webpages.

Question: have you seen Readymade? They have a MacGyver challenge, similar sensibility. Mark used to work with the Readymade publisher. They see it differently — Readymade not as geeky as Make is going to be. Mark likes a lot of Readymade, but Make involves technology. Followup question: what is your definition of technology then? Large and blurry — the spread of the tech metaphor to other areas.

Question: what about the legality issue? It’s an editorial decision, says Dale. In a sense, the Hacks book isn’t written for hackers, it’s written for people who can learn from hackers. But the grey areas move — burning CDs was a grey area a few years ago, and now? There’s a big difference between doing a hack, describing it, and then building it from the plans.

Dale: “we see the O’Reilly audience as a core, but we want to expand beyond that.” Like reading National Geographic sometimes. It’s not just procedure, it’s also personal.

Question: what kind of licensing? Ooooh, good question. Answer: it’s … complicated. We’re just seeking a non-exclusive license. When we work with photographers, they have their own mad rights world. So without paying them, it would be hard to get them to shift to a creative commons world. My real goal is to build a community with this. Questioner is curious because she’s Indian, and keen to take this stuff and use it in India.

Question: are you familiar with Lindsay Publications’ books? They reprint old titles like “How to Make Your Own Foundry”.

Comment from audience: woodworking magazines really good guide, as are cooking magazines.

One feature is how to do good project shots, which is deliberate, so they get better pictures when people send stuff in.

Sorry, lost my note-taking ability then as I got into a discussion. Major bit of info: it’ll be 200 pages, quarterly. Subscription-driven, it sounds. Hybrid book/magazine: mook!

URL: http://make.oreilly.com/

notes on: edd dumbill on doap

Hi, confusing switches-of-first-person-and-third fans, and welcome back to my OSCON incoherentfest. You join us slamming into the Marriott, late enough to miss the redeye all-Dyson triple-slam keynote completely. Sorry. I ended up staying up too late trying to stop Ada bowling into Larry Wall’s legs, tipping him over Nat Torkington’s hotel balcony, and other acts of chaotic-god toddler action. So you get the first post-keynote session: Edd Dumbill’s (whose name I always misspell) DOAP talk.

Aim: to cut down the amount of work a software maintainer has to do to get the news out about their work. Too many project registries: Freshmeat, OSDir, GNOME software map, blah, blah, blah. Hard to keep up to date. Flip-side is if you’re trying to maintain your own registry, it’s hard to keep track.

Goals: it had to cope with internationalized descriptions. There had to be tools for the creation and consumption of these descriptions. It had to have interoperability with other Web metadata – FOAF, Dublin Core, RSS.

Use cases: easy importing of details into registries; exchange between registries; automatic configuration of resources – finding CVS repositories or bug trackers; and assisting packagers.

Tried to learn from recent metadata successes and failures. Dublin Core is a double-edged sword; mostly goodness, great documentation, raising awareness. Not done so well: they underspecified in various ways, so there are questions about how to use certain terms (what’s “author”? Name? E-mail address? URL?). RSS: very messy history, suffered from underspecification too. (Edd was involved in RSS 1.0, which was very much not underspecified, if I recall). ebXML is an electronic business vocabulary – boring, but they have schema and lots of documentation. HTML – hard to retrofit validation. Lessons Edd drew from this: docs, interop, schema, community.

XML or RDF, that is the question. Straightforward XML? Or RDF? XML comes in many flavours, though: well-formed, with a W3C XML Schema (huge processing overhead; Edd doesn’t like), RELAX NG (feels a lot more lightweight, but has its own issues). RDF provides “webby data” – semantics as well as syntax. Edd likes RDF.

Surveyed existing work: Freshmeat, SourceForge, GNOME, KDE, Open Metadata Framework, Advogato. He here shows huge spreadsheet of relative features like mirror site lists, purchase links, demo site, license, etc. He thought the social relations between developers and projects was particularly important, because egoboo is so important. And screenshots! Must have screenshots!

Fields that weren’t anywhere else, but he stuck in: non-CVS repositories, wikis, more project roles: translator (woefully underrecognised), testers, etc. Spread the egoboo.

Biggest issue: how do you uniquely identify a project? Can’t use name, names change, names clash. How about a URL? But what URL? What happens if lose that domain? He picked homepage, but what if homepage moves? So Edd added “old homepage” property. If two DOAP descriptions share a homepage (either their current or their old homepage), then they’re the same project.

What about license? So that people can compare licenses in different projects, we give a unique URI for each license. He’s defined URIs for common licenses. If you actually resolve those URIs you get an RDF file that points to the GNU site, etc. (Edd makes the argument that the FSF might move their license, so he’s pointing to his own URIs. But what if Edd loses his domain or gets hit by a truck? Not sure this isn’t just adding an indirection rather than solving the problem.)

Shows a simple DOAP file. Looks good, nice and simple: DOAP file pulls in FOAF and RDF namespaces to define a bunch of stuff. Looks easily createable with a template.
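(Ed: since I can’t paste Edd’s slide, here’s a hypothetical sketch of building a minimal DOAP description with Python’s rdflib – the property names (doap:name, doap:homepage, doap:shortdesc, doap:maintainer) are from the DOAP vocabulary as I understand it, but the project, the person and all the URLs are invented.)

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, FOAF

DOAP = Namespace("http://usefulinc.com/ns/doap#")

g = Graph()
g.bind("doap", DOAP)
g.bind("foaf", FOAF)

# Hypothetical project; per the talk, DOAP identifies projects by homepage.
project = URIRef("http://www.example.org/widget/")
g.add((project, RDF.type, DOAP["Project"]))
g.add((project, DOAP["name"], Literal("Widget")))
g.add((project, DOAP["homepage"], project))
g.add((project, DOAP["shortdesc"], Literal("A small example project", lang="en")))

# Spread the egoboo: credit the maintainer with a FOAF description.
alice = URIRef("http://www.example.org/people/alice#me")
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice Example")))
g.add((project, DOAP["maintainer"], alice))

print(g.serialize(format="pretty-xml"))
```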

Tools: need a Creator, Viewer, and a Validator. Someone else wrote a DOAP-a-matic. There’s also a rel link for autodiscovering projects on Webpages. Someone else has written a firefox plugin that shows a colourful human-readable version of the DOAP data if a HTML page has a link to a DOAP file in it. Edd is writing a toolkit for validating, written in Mono.

Participants: OSDir.com and GNOME Software Map are already interested, looking to engage others. Needs more tools, like autoconf, distutils and MakeMaker spitting out info about the project they’re managing.

Q and As: Edd says he’s deliberately staying out of category discussions. You can add categories by just pointing to URI of the category — so you can point to Freshmeat categories, Debian tags, etc. And of course because it’s RDF, you can assert relationships between those categories.

2004-07-28

notes on: perl lightning talks, impressionistically rendered

Stumbling into the Perl Lightning Talks now. Randal Schwartz (looks like Randal. Certainly wearing Randal-like clothes. He’s the Hooter’s guy, right? I always get him and Tom Phoenix confused. Okay, definitely Randal.) Anyway, he’s written a CGI replacement that uses Class::Prototyped to create a proper MVC-style object interface for Web applications. The stub class implements a default Web app, and you just stick in your own methods which customise it. I wonder if this is how WebObjects works? They worked out how to structure it by looking at oodles of existing CGI apps.

I don’t know what the name of this class is (for I am an idiot), but it’ll be out on CPAN soonest. Look for the Hooters guy.

(Ed: The whole set-up is far better explained by Randal himself in this Linux Magazine article from Feb 2004.)

Perl is too slow! is the name of the next talk.

Perl’s compile cycle is slow. Mod_perl solves this. But what about other environments?

Why can’t we create a generic stub for any program?

This guy (known as anonymous for the purposes of these notes) saw Speedy CGI, tried to use that, but couldn’t get it to work for command line environment.

So he wrote pperl. It used to be a big backend that sat communicating via STDIN and STDOUT over a Unix domain socket. But no STDERR or weird files.

Richard Clamp from London.pm came to help. He’s a veeery scary Unix hacker who knows how to use send_fd() and recv_fd() to send file descriptors over Unix domain sockets. Yes, that scary.

Result: /usr/bin/pperl is about ten times faster than normal Perl

BUT THAT’S NOT GOOD ENOUGH!

“String matching in Perl is slow” (controversy!)

Well, it’s *naive*. Converting string matches into C speeds things up, but it’s still naive.

Aho Corasick algorithm is faster. 150 times faster.

Text::QSearch implements Aho Corasick algorithm, will be on CPAN (once lawyers look it over) in a few weeks.


The next guy (it’s all guys so far) doesn’t like CVS. Which is bad, because he maintains a lot of modules, so he gets a lot of patches. He wants a VCS system where everybody has access to the repository, but prevents complete madmen (apart from him) from trashing it.

There is this (GPL’d) config management system thing called Aegis. So under Aegis, you can declare a change, and put it into a task list. And someone else can pick up something and check out the data. When you commit, though, you have to pass tests, and build: the sort of thing you have to bolt onto CVS.

Then after that it stays in a waiting room, where people review the code. So Mr Module Owner can check the code, kick it back or accept it.

He’s offering AEGIS accounts at … dammit. E-mail addresses are meant for projectors and wikis, not audio. Well, ask on #perl I imagine.


This guy’s name is Thomas, and he’s from Amsterdam. He faced a challenge that he couldn’t code his way out of. He went to the developing world, and taught people how to use open source code, but when these people went away to work on it, they weren’t online, so they couldn’t use it. Enter: NGO In A Box, which is prepackaged open source software for NGOs to use.

(Ed: Everybody really likes NGO in a box. Big applause, not least because it was the shortest of the talks.)


Next Perl talk I sacrificed to the God of tidying up the other descriptions. It was something big and clever and Microsofty that Boeing uses. Sorry.

Mark Jason Dominus can’t be here because he has a new baby girl! Ahhhh.

Andy Lester is up. He likes to test stuff. He’s here to talk about “prove”, which is part of Test::Harness. It is like make test, and is now in core Perl. It will *change how you think about tests*, he says.

Prove spits out better diagnostics than make test, so you can file better bug reports.

Test first and prove are best friends. (Not sure why, I guess because make test isn’t.)

(Ed: prove looks like a TestRunner for Perl. Which is cool, although I’m surprised Perl didn’t have one already.)

Check out more info at “prove --man” with latest Perl distribs, and http://qa.perl.org/

David Turner. Works for RMS. Starts by quoting the “spider in the hands of an angry Lord” preach. “Licensing is not theology – it’s rocket science”. He’s a fine preacher.

Parrot licensing will cast you all into hell, he tells the audience, and lo they are sore afraid. Go ye through the Parrot source, and stick there proper copyright notices. Don’t put “all rights reserved”, on fear of your mortal soul, because that the Lord RMS doth not think that it doth mean what thou thinkest it means. Put it instead under the GPL or the Artistic License. But don’t put it under the Artistic License, put it under the Clarified Artistic License, for the Artistic License as it stands is sorely artistic, and lo it is ambiguous in many areas. And Brad Kuhn did come down from on high and suggest that the CAL be the license for Perl6, and he spake truth, for ye will be sent to Hell if you do not heed him! For he is the prophet of RMS, and the one who is to come that is greater, who is known as Hurd, and whose todo list is legion. (I am paraphrasing a fair bit here)

Some copyright notices say “(c) The Perl Foundation”. You need to get signed declarations for copyright re-assignment, it’s the Law. Talk to the FSF and we’ll sort it out for you. Amen, brother!

Enough. I go find daughter so that I can chase her in giggling circles around the hotel.

notes on: mono 1.0

I turned up on time for Miguel’s Mono talk. Unfortunately, even though Miguel is the fastest talker in the world, he didn’t get very deeply into the cool, unknown stuff, so it’s mostly cool known stuff.

They just released Mono 1.0. It’s the result of three years of work. Mostly assembled by members of the community; roughly 300 people. According to the stats, they got a lot more code done than other projects — probably because it was easy to compartmentalise. Each class could be built in isolation.

What is Mono? It’s an open source implementation of .Net. It’s a cross-platform implementation of a virtual machine, an SDK, and a bunch of class libraries. Linux, of course, plus MacOS X, Solaris, etc, and Windows. Windows doesn’t need Mono, but fifty percent of the Mono contributors use Windows primarily. A lot of people are looking at a migration path away from Windows.

We support: C#, Java and Nemerle. In preview, VB.NET, Jscript, and Python.

Mono 1.0 doesn’t have Windows.Forms, EnterpriseServices or InstallationServices. But they do have Gtk# and Gnome.net, bindings for building desktop applications on Linux. Also they have a Cairo (graphical subsystem) substrate. Lot of third-party database support, Relax NG.

Documentation is lifted mostly from the ECMA spec (“We’re bad, but not as bad as most open source software”). The Documentation system has a “wiki” feature, so you can enter and upload contributions using the help system.

Mono is now the official Novell desktop development platform. We’re using it initially for new software, and for extending existing software (there are APIs for using Mono as an extension system). Examples include: Beagle (a filing-system extension which does Spotlight-like metadata features, demo’ed in Norway to 300 developers six hours before Steve Jobs demo’ed the same stuff in Tiger), Dashboard, F-Spot, Evolution 2. None of that is shipping; it’s for the next version of the desktop. Nat’s department is doing all the interesting desktop stuff. We’ve mixed up the kernel and desktop people to make sure that they’re not working in isolation.

Roll-outs: Voelcker uses Mono to run 400 servers with 150,000 users; they ported an ASP.NET application to Unix.

Reuse: we can run existing .Net apps written in C# or VB. Third party compilers in Eiffel, Ada, Fortran, C/C++ etc.

Java. We have a JIT that converts Java bytecodes into the .NET VM. You can run C# and Java code side-by-side. We use the GNU Classpath, which means we have the same limitations (no Swing, etc). Applications like Eclipse run out of the box.

There are two stacks: there’s the ASP.NET/ADO.NET/Windows.Forms stack which is the “Microsoft Compatibility Libraries”. And then the rest of the code, which is free software running on both Windows and Linux. The Mozilla bindings don’t work on both, and the Gnome APIs don’t either. But everything else is cross-platform between Windows and *nix. We have a nice Rendezvous stack someone wrote at Novell.

Who develops Mono? Novell has twenty engineers working full-time on Mono, and 300 developers from the open source community. Also help from “nameless embedded system vendors”, Mainsoft (a product that hooks into VisualStudio and lets you run ASP.NET stuff on J2EE servers) and SourceGear.

Where are we going? Continue to improve Unix, Gnome, Cocoa. We’re building new things in .Net, iFolder 3 (multi-user, open source), Beagle (WinFS), Novell Dashboard.

Mono 1.2 – incremental update, debugger, Cocoa 1.0, Gtk# 1.2, Windows.Forms

Mono 2.0 – ASP.Net, ADO.NET, System.Xml, Windows.Forms 2.0

Currently: Lots of optimisations. SSA Partial Redundancy Elimination, cool ex-Mozilla SportsModel garbage collector.

Out of time!

notes on: crash course in database design: surviving when you don’t have a dba

Okay, now I’m late arriving at Dirk Elmendorf’s crash course in database design. We join him just after he’s explained why you should care about db design, and where to bury your DBA’s body when you’ve accidentally poisoned his coffee. I think.

So first steps: you have entities, and relationships. Entities are bits of data, relationships are connections between them. You don’t have to have relationships between every entity (he’s really seen databases that are like that).

Easiest example: a one-to-one relationship. A professor and a private office, say. So one professor has one office. Then there’s one-to-many. One classroom may have many classes occur in it. Many to one: a class has many classrooms. Then there’s the complicated one, the many to many: if a class was repeated across a bunch of classrooms.

SQL doesn’t handle many-to-many. So you have to insert something in the middle, so you have a many-to-one, and then a one-to-many. So in this case, you create a new entity: a class schedule. A class relates to a single time and date, and a classroom relates to a single date and time.
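(Ed: here’s the junction-table trick as a runnable sketch, using Python’s sqlite3 so there’s no server to install. The table and column names are mine, not Dirk’s.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE class     (class_id     INTEGER PRIMARY KEY, title       TEXT NOT NULL);
CREATE TABLE classroom (classroom_id INTEGER PRIMARY KEY, room_number TEXT NOT NULL);

-- The new "schedule" entity turns many-to-many into two one-to-many links:
-- each schedule row points at exactly one class and exactly one classroom.
CREATE TABLE class_schedule (
    schedule_id  INTEGER PRIMARY KEY,
    class_id     INTEGER NOT NULL REFERENCES class(class_id),
    classroom_id INTEGER NOT NULL REFERENCES classroom(classroom_id),
    starts_at    TEXT    NOT NULL
);
""")
```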

(Editorial confession at this point: I learn everything I know about database design from futzing around with the Access visual designer. Relational DBs give me the heebie-jeebies. So I’m now hitting the extent of my knowledge. I may be mangling what Dirk is saying here – d.)

Building your database: use consistent coding standards (you putzes!). Single and plural table names, upper case, underscore, camelcaps, special table prefixes — e.g. ref_units, where “ref” is a prefix for constant reference values.

Normalisation! Hooray! Normalisation is a set of rules for designing schemas. It helps you reduce redundancy. Redundancy is bad because it causes errors, and wastes resources. (Ed: Also I think there’s a commandment against it in the Bible. Or there was until they refactored them).

It eliminates database inconsistencies – poorly laid out data can provide false statements.

Normalization – first normal form. The easiest one: all records have to have the same “shape” – the same fields, the same kinds of data. One way to cheat is to have a fake array in your table, like “author_name1”, “author_name2”, “author_name3”. What happens when you have more than three authors? What do you do when you delete an author? Author3 sometimes becomes a comma-separated field of the rest of the authors. Yuk. Address1,2,3 is very common – but databases can store newlines now.
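(Ed: the usual fix for the fake-array problem, sketched in sqlite3 – invented names, but the shape is the point: authors get their own table, so a book can have one author or twelve.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE book (
    book_id INTEGER PRIMARY KEY,
    title   TEXT NOT NULL
);

-- No author_name1/2/3 columns: one row per (book, author) pairing.
CREATE TABLE book_author (
    book_id     INTEGER NOT NULL REFERENCES book(book_id),
    author_name TEXT    NOT NULL,
    PRIMARY KEY (book_id, author_name)
);
""")
```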

Second normal form. Rows can’t have partial key dependencies. So if you have a field in the employee table marked “office_location”, you’re mixing ideas, because if you delete all the employees, your office_location data will disappear too. Move it out to its own table.

Third normal form. One non-key field cannot be a fact about another non-key field. So you can’t have a book table, with a publisher *and* a publisher’s address. You need to break that out again – a separate publisher table.

Fourth normal form. A record should not contain two or more multi-valued facts which are independent. So a class table that has “room” and “professor”. You really want to move those out into their own tables, because those facts aren’t related.
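(Ed: a quick sqlite3 sketch of where the second, third and fourth forms push you, reusing Dirk’s employee/office, book/publisher and class/room/professor examples. The schema details are mine.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- 2NF: office locations live in their own table, not as a column of employee.
CREATE TABLE office   (office_id   INTEGER PRIMARY KEY, location TEXT NOT NULL);
CREATE TABLE employee (employee_id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                       office_id   INTEGER REFERENCES office(office_id));

-- 3NF: the publisher's address is a fact about the publisher, not the book.
CREATE TABLE publisher (publisher_id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                        address TEXT);
CREATE TABLE book      (book_id INTEGER PRIMARY KEY, title TEXT NOT NULL,
                        publisher_id INTEGER REFERENCES publisher(publisher_id));

-- 4NF: room and professor are independent facts about a class, so they go in
-- separate association tables instead of sitting side by side in one row.
CREATE TABLE class           (class_id INTEGER PRIMARY KEY, title TEXT NOT NULL);
CREATE TABLE class_room      (class_id INTEGER REFERENCES class(class_id), room      TEXT);
CREATE TABLE class_professor (class_id INTEGER REFERENCES class(class_id), professor TEXT);
""")
```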

(Ed: basically, databases seem to crave to become triplets. I can see why people turn to RDF religion after this.)

Fifth normal form: information cannot be represented by several smaller record types. So basically, once you’ve gone through the first four normal forms, stop breaking out the bloody tables, because you’re GOING TOO FAR.

Normalization – you can have too much of a good thing. Start at a normalized database and work backwards as you need to optimize.

Practical tips: ideally, put data checking into a central library, and then make sure all data is run through it before it enters the database.

Unfortunately, the usual case is that you don’t have enough control over access to the database – the db is used by multiple applications, languages, and the integrity of the data is more important than performance. So you need to put data-checking inside the db.

A primary key is a non-null unique identifier for a row; a composite key is a collection of fields that together give you a non-null unique identifier. A foreign key is a primary key that’s stored in a different table. A foreign key constraint means that a foreign key has to appear as a primary key in another table.

Cascading updates and deletes. Cascade allows you to handle foreign tables that your application may not even be aware of. ON UPDATE CASCADE handles updating of the foreign key. ON DELETE CASCADE deletes rows that reference a key which is being deleted from its primary table. ON DELETE CASCADE is daaaaangerous. You might want it to delete logs, but you really don’t want all records of a person disappearing when you delete something. ON DELETE SET NULL – so you can just make individual record values disappear.
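(Ed: here’s ON DELETE CASCADE doing its slightly dangerous thing, sketched in sqlite3 – note that SQLite only enforces foreign keys once you switch the pragma on. Invented schema.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite ignores FK constraints without this
conn.executescript("""
CREATE TABLE account (account_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE login_log (
    log_id     INTEGER PRIMARY KEY,
    account_id INTEGER REFERENCES account(account_id) ON DELETE CASCADE,
    logged_at  TEXT
);
INSERT INTO account   VALUES (1, 'dirk');
INSERT INTO login_log VALUES (1, 1, '2004-07-28');
""")
conn.execute("DELETE FROM account WHERE account_id = 1")
# The log rows followed the account out of the door: prints (0,)
print(conn.execute("SELECT count(*) FROM login_log").fetchone())
```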

Column constraints. Default to NOT NULL, because NULL can cause real problems with sorting, etc. UNIQUE – you can use this for multiple columns.

CHECK – stuff like CHECK( age > 0). You’re looking for data validity, but you’re not putting business logic in your database. You want to leave that to a proper DBA.
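(Ed: the column constraints in action, again in throwaway sqlite3 – the database bounces a bad row before it’s ever stored. Names invented.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE person (
    person_id INTEGER PRIMARY KEY,
    email     TEXT    NOT NULL UNIQUE,          -- no NULLs, no duplicates
    age       INTEGER NOT NULL CHECK (age > 0)  -- data validity, not business logic
)
""")
try:
    conn.execute("INSERT INTO person (email, age) VALUES ('a@example.org', -3)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)  # CHECK constraint failed
```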

Triggers! Advanced data checking, or to scrub data – automatically lowercasing stuff. Can also handle more advanced clean-up – so a delete on a single table can cause a trigger to clean up other, related tables and log the event to a log table. But triggers are daaangerous for programmers, because they’re hard to maintain in the development cycle, and invisible when debugging.
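(Ed: a toy example of the scrub-on-the-way-in kind of trigger Dirk describes, sketched in sqlite3.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (person_id INTEGER PRIMARY KEY, email TEXT NOT NULL);

-- Whatever case the application sends, store the address lowercased.
CREATE TRIGGER lowercase_email AFTER INSERT ON person
BEGIN
    UPDATE person SET email = lower(NEW.email) WHERE person_id = NEW.person_id;
END;
""")
conn.execute("INSERT INTO person (email) VALUES ('Dirk@Example.ORG')")
print(conn.execute("SELECT email FROM person").fetchone())  # ('dirk@example.org',)
```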

Dirk’s DBA told him to talk more about indexes. Indexes are cheat-sheets for the database to improve performance. But you need to pay attention to actual queries to figure out where they are needed. Index a lot, but remove the ones that are not being used. Don’t bother indexing a column which has a small number of possible values across a lot of rows – booleans, or IDs drawn from a short list.
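(Ed: and the index tip in miniature – create the index for the column your real queries filter on, then ask the planner whether it’s actually being used. sqlite3 again, invented table.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE book (book_id INTEGER PRIMARY KEY, title TEXT, publisher_id INTEGER);
CREATE INDEX idx_book_publisher ON book (publisher_id);
""")
# The plan should mention idx_book_publisher; if it never does for your real
# queries, the index is just dead weight at write time.
for row in conn.execute("EXPLAIN QUERY PLAN SELECT * FROM book WHERE publisher_id = 42"):
    print(row)
```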

Conclusions: DB design isn’t new or cutting edge, so there’s a ton of literature out there to help you learn more about databases.

Just because you don’t have a DBA doesn’t mean you should build on top of a poorly designed database.