iDont, or Why The iPhone Is Evil

Tim Bray has been hired by Google as a “Developer Advocate.” On his personal blog, he describes what this means and why he is excited to have the job. It’s a good read, and definitely recommended. In particular, his take on the iPhone is dead-on: it’s not good for developers, definitely not good for the Web, and not even good for consumers in the long run. Here is his explanation (excerpted from his blog post) of why Android is better than the iPhone:

The iPhone vision of the mobile Internet’s future omits controversy, sex, and freedom, but includes strict limits on who can know what and who can say what. It’s a sterile Disney-fied walled garden surrounded by sharp-toothed lawyers. The people who create the apps serve at the landlord’s pleasure and fear his anger.

I hate it.

I hate it even though the iPhone hardware and software are great, because freedom’s not just another word for anything, nor is it an optional ingredient.

The big thing about the Web isn’t the technology, it’s that it’s the first-ever platform without a vendor (credit for first pointing this out goes to Dave Winer). From that follows almost everything that matters, and it matters a lot now, to a huge number of people. It’s the only kind of platform I want to help build.

Apple apparently thinks you can have the benefits of the Internet while at the same time controlling what programs can be run and what parts of the stack can be accessed and what developers can say to each other.

I think they’re wrong and see this job as a chance to help prove it.

Amen, Brother Bray. Google may or may not be able to keep evil out completely, but they’ve obviously made a good hire. Go get ’em.

Posted in Coding, Current Events, Digital, Digital Citizenship, Internet

CD bit size calculations

A question occurred to me the other day — how big (physically) are the bits on a CD?

Measuring a CD’s inner and outer diameters, I got about 119mm for the outer diameter and 44mm for the inner diameter of the data area. Converting those to radii (59.5mm and 22mm) and taking π(r_outer² − r_inner²), the data occupies an annulus of about 9,600mm². According to Wikipedia, a CD-R has 867,041,280 bytes of raw data (including the error-correction code and everything). Multiplied by 8, this is 6,936,330,240 bits.

Dividing, I got about 722,000 bits per mm². Taking the square root, it turns out that about 850 bits placed end-to-end would be 1 millimeter long. Wow.

Imagine DVD-Rs (with 6-7 times the capacity, for single-layer) or BD-R discs (with about 36 times the capacity, for single-layer) squeezed into roughly the same area. Since linear density scales with the square root of the capacity ratio, that works out to over 2,000 bits to the linear millimeter for DVD and more than 5,000 for Blu-ray. Dayumn.
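Here’s the back-of-the-envelope arithmetic as a quick Python sketch, using the measurements above (the 6.5× and 36× capacity ratios for single-layer DVD-R and BD-R are the rough figures quoted above, not exact values):

```python
import math

# Measured diameters of the data area (mm), from the post above
outer_d = 119.0
inner_d = 44.0

# Area of the data annulus: pi * (r_outer^2 - r_inner^2)
area_mm2 = math.pi * ((outer_d / 2) ** 2 - (inner_d / 2) ** 2)  # ~9,600 mm^2

# Raw CD-R capacity in bits, per the Wikipedia figure quoted above
cd_bits = 867_041_280 * 8                                       # ~6.94e9 bits

areal_density = cd_bits / area_mm2           # bits per mm^2, ~722,000
linear_density = math.sqrt(areal_density)    # bits per linear mm, ~850

print(f"Data area:      {area_mm2:,.0f} mm^2")
print(f"Areal density:  {areal_density:,.0f} bits/mm^2")
print(f"Linear density: {linear_density:,.0f} bits/mm")

# Rough scaling to single-layer DVD-R (~4.7 GB) and BD-R (~25 GB):
# roughly the same disc area, so linear density scales with the
# square root of the capacity ratio.
for name, ratio in [("DVD-R", 6.5), ("BD-R", 36)]:
    print(f"{name}: ~{linear_density * math.sqrt(ratio):,.0f} bits/mm")
```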

I took the CD-R and looked at it using one of the stereo microscopes we have in the lab — at full magnification, I was just able to see some wavy structure. You’d need something a bit more expensive (this was just a standard $250ish lab ‘scope) to see the actual bit patterns, I think.

On a somewhat-related note, happy Pi Day, everyone!

Posted in Digital, Math

Software Engineering

Once again, Scott Adams has it right.

I never really understood most of the methodologies behind “software engineering.” Yes, a definite code-review process and an overall plan are certainly needed for projects of any reasonable size — but to me, the main takeaway from the “Software Engineering” course I took was that there is a *lot* of management going on, and it doesn’t seem to be especially effective. We clearly still have not eliminated software bugs, and yet we were told in the course that twenty or thirty lines of code per day is optimistic for large projects.

This seems ridiculously inefficient to me. If a project is broken down into manageable modules, coding should scale reasonably well across project sizes and across varying numbers of coders working on the project (assuming that each coder works on one module at a time).

What seems even more insane is the idea of “pair programming” — where two programmers work in close cooperation as a team, with one computer. One programmer “drives” (writing the code) while the other “rides shotgun” and apparently just watches the first one code — presumably to point out mistakes as they are made and to provide running commentary. How can this possibly be more efficient than providing each programmer with his or her own computer, even if it’s a ten-year-old eMachines piece of crap? A cheap new PC costs less than $500 these days — less than a week’s salary for a programmer.

How about this for a methodology:

* The software group meets to nail down specifications for the project, decide on a language or languages, and write these specifications down. All design is high-level at this point, and the focus is on whether this can be done in the time allowed and within the proposed budget. The client and lead developer share responsibility for the final version of this document. Input is welcomed from all developers, but the client and lead developer sign off on the master plan. (This part should take no more than a day for most projects, and probably an hour or two for many. For something huge like a new version of an OS, perhaps a few weeks at most.)

* Once the master plan is in place, the developers meet to break the coding down into modules. Each module is described in a module contract document, specifying what language or languages will be used to write it, the maximum resources allowed, the deadline for module completion, and (most importantly) what values the module will be passed and what values it will return to other modules. (For instance, a “square root” module would be specified to accept a single parameter of type double and return a single value, also of type double.) Any possible error conditions should also be enumerated, along with how to handle them: for the square root function, if it is passed a negative number as input, should it return a zero, somehow have the ability to return a complex number, or call an error routine? (A rough sketch of such a contract appears after this list.)

* Once the contract for each module is written, it is assigned to a team, ideally comprising one programmer. This programmer (or a team led by one individual responsible for the module) writes the code and ensures that it meets the specifications.

* Once the module is written, it is sent to one or more other programmers (perhaps anonymously) for testing, along with its specification document. They note its performance on all test cases — or choose a representative set of test cases that they feel has the best chance of turning up any errors.

* The “main” program is handled as a module like any other, with its own specification contract document.
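As a concrete illustration of the module-contract idea, here is a minimal sketch of the square-root example in Python. The function name, the use of Python’s float in place of double, and the decision to call an error routine on negative input are all illustrative assumptions, not part of the methodology itself:

```python
import math

# Module contract (illustrative): square_root
#   Input:  one value of type double (Python float)
#   Output: one value of type double (Python float)
#   Error condition: negative input -- this contract chooses "call an error
#   routine" (raise ValueError) rather than returning 0 or a complex number.
def square_root(x: float) -> float:
    """Return the non-negative square root of x, per the module contract."""
    if x < 0:
        raise ValueError(f"square_root: negative input {x} not allowed by contract")
    return math.sqrt(x)


# A representative set of test cases another programmer might run against
# the contract document alone, without seeing the implementation.
def test_square_root() -> None:
    assert square_root(0.0) == 0.0
    assert square_root(4.0) == 2.0
    assert abs(square_root(2.0) - 1.4142135623730951) < 1e-12
    try:
        square_root(-1.0)
    except ValueError:
        pass  # error routine called, as the contract specifies
    else:
        raise AssertionError("negative input should trigger the error routine")


if __name__ == "__main__":
    test_square_root()
    print("square_root module meets its contract on these test cases")
```

A second programmer could run test_square_root against the contract alone, which is the “anonymous testing” step described above.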

This methodology, I think, would help cut through a lot of the red tape associated with “software engineering” while providing a way to ensure that the code produced is sound. It also avoids the “extreme programming” idea of having programmers working in teams, which to me sounds a lot like trying to duct-tape cats together to make a better mouse-catching machine.

Posted in Coding, Digital

If it ain’t broke — tweak it!

2.66GHz to 3.57GHz, so far — a roughly 34% overclock, running on air cooling and staying within voltage specs. Apparently all the hype about the Core i7’s overclockability isn’t just hype, after all. Wow.

CPUID screenshot of system running at 3.57GHz

Posted in Digital, System Administration