My CSS Wishlist (Fall 2013)

CSS has come a long way in the past few years, with features like media queries, 3D transforms, multiple backgrounds, column layouts, and more finally becoming usable in real-world applications. This is in part due to advancing standards, of course, but more so due to the past couple of IE releases making great strides to catch up with the competition. However, this doesn’t mean all is well in CSS-land; there are still a handful of incredibly useful CSS features I’m eagerly awaiting. In no particular order:

Flexbox support in Firefox

Flexbox provides a new way to arrange elements within pages. Traditionally, layout has mostly been done using floats to stick elements to one side or the other of the parent element, with many layers of floated elements often being necessary for complex sites. While this is certainly better than table-based layouts, this style is needlessly verbose and brittle; if one element happens to overflow the available area for its float, the whole layout can come crashing down and leave the page mangled. Although this risk can be mitigated with careful design, flexbox is a much more robust tool for complex layouts.
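As a minimal sketch of why this is appealing, here is a basic two-column arrangement expressed with flexbox instead of floats. The class names are purely illustrative, and vendor prefixes are omitted:

.page { display: flex; }       /* children become flex items laid out in a row */
.sidebar { flex: 0 0 200px; }  /* fixed 200px column that never grows or shrinks */
.content { flex: 1; }          /* takes whatever space remains */

Because the columns are flex items rather than floats, they stay side by side even when something inside them is too wide; an oversized element overflows its own box instead of dropping the whole column below its neighbor.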

Unfortunately, Firefox doesn’t support the latest revision of flexbox. It is the only major browser (not counting the Android Browser, but I suspect most Android users use Chrome anyways) not to support flexbox; even IE11 supports it! Worse still, there is no suitable polyfill for flexbox as of yet. Some work on a polyfill exists, but it unfortunately isn’t ready for real-world use yet. Thankfully, the Mozilla team is working on this, and I’ve got my fingers crossed that proper flexbox support will make it into Firefox in early 2014. Until then, the best we can do is stage changes and explore flexbox within the confines of Chrome and IE.

Independent Axes for fixed & absolute Positioning

This may sound like a niche feature, but the web is in fact littered with questions from people trying to get fixed positioning to cooperate with zooming, usually in the context of mobile browsers where zooming into text is incredibly common. The infamous ppk of quirksmode even advocates adding a new device-fixed position to work around this issue.

My hypothetical solution for this situation is a bit different: just allow an element to be fixed on one axis and absolute on the other. In the case of my sidebar, I want it to be fixed in the vertical direction and absolute in the horizontal direction. This way it would always be pinned to the top of the viewport, but horizontal scrolling would put it out of view, since absolute positioning would pin it to the side of the document, not the viewport. Zooming would also work properly, since the sidebar would be able to grow outside the viewport rather than growing within the viewport to obscure content. This method is inferior to ppk’s solution for the case of a fixed full-width header, however.
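Nothing like this exists in any spec, but purely as an illustration, per-axis positioning might look something like the following; the property names are entirely made up:

/* Imaginary syntax: no browser or spec supports per-axis positioning. */
.sidebar {
  position-y: fixed;    /* pinned to the top of the viewport */
  position-x: absolute; /* pinned to the side of the document, free to scroll out of view */
  top: 0;
  left: 0;
}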

:nth-descendant Pseudo-classes

There is currently no way to select an element based on depth in CSS. Sure, you can do things like #spam > div > div > div to get at an element 3 levels below, but this is horribly cumbersome, terrible for selector performance, and doesn’t scale to arbitrary depths. Examples of where this matters are deep forum quotes or email threads which are heavily nested. It might be nice to mark each reply level with a slightly different color for legibility purposes. With current CSS selectors, each level of depth must be manually defined, and there’s no way to make the highlighting start over at a certain depth; everything must be directly specified. With these descendant selectors, we might do something like this to get a repeating rainbow pattern where every descendant is the next color:

.quote:nth-descendant(6n+1) { color: red; }
.quote:nth-descendant(6n+2) { color: orange; }
.quote:nth-descendant(6n+3) { color: yellow; }
.quote:nth-descendant(6n+4) { color: green; }
.quote:nth-descendant(6n+5) { color: blue; }
.quote:nth-descendant(6n+6) { color: violet; }

This would repeat to any depth without repetitive CSS. While this example is a somewhat special case, similar uses exist anytime elements might nest to an unpredictable depth. This idea was suggested a decade ago with a few extra pseudo-classes, but sadly never seems to have caught on.

::marker Pseudo-elements for List Styles

Less game-changing than the rest of these features, but being able to directly select and style list bullets would be wonderful. Right now we rely on kludges like wrapping bullets and content in separate elements so they can be styled independently, or removing the bullets altogether and re-adding them via ::before. ::marker is actually a part of CSS3, alongside features like counters and list-style-type which are implemented, but unfortunately no major browser currently supports it. Can I Use doesn’t even seem to have a section for it, so this appears to be another hugely useful feature left by the wayside for now.
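As a rough sketch, here is what directly styling a marker could look like next to the usual ::before workaround; treat the ::marker rule as aspirational, since nothing renders it yet:

/* Aspirational: style the bullet itself (no major browser supports this yet). */
li::marker { color: #c00; }

/* Today’s workaround: suppress the real bullet and fake one with ::before. */
li { list-style: none; }
li::before { content: "• "; color: #c00; }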

Don't steal my middle clicks

I’ve been seeing this far too often. At first I only noticed it on sites that were trying to be too flashy at the expense of usability. Then I saw it on YouTube and realized that there’re plenty of bright people who’ve had this fly way over their heads.

It shouldn’t be this hard: When users middle-click or control-click a link, they expect the page to load in a new tab. When you deny users this browser-provided feature, which works everywhere else on the web, they are left with only a few options. In rough order of likelihood:

  • Put up with your awful design: “Oh well, now I have to click back to get to the index.”
  • Do it the hard way: “At least I can still right-click > Open in new tab.”
  • Sic NoScript on you: “Just another kid playing with Javascript.”
  • Leave your site: Really though, nobody’s going anywhere over this.

No, your users aren’t too stupid to use tabs. No, it’s not going to overload your server. No, removing expected, useful, and unobtrusive features is not clean design (unless you’re Apple, of course). No matter what contrived excuses you can muster up, you are ultimately spitting in the face of the rest of the web. There’s a reason we don’t block right-click menus, add strange styles to links, or abuse imagemaps anymore: everyone finally realized that the web is a much nicer place when we use a standard interface and stop reinventing the wheel.

I expect some of this from people simply attempting to create an attractive interface, but I never expected Google to essentially remove such a basic navigation feature from one of their core products.

"Standard" Data Formats

The last version of this site existed from around late 2008 to late 2010. This was the age when I still used PHP for anything more than masochism or money. I’ve been meaning to put together a site ever since, but only just now cared enough to actually do it. Tonight I grabbed the database dump from the old server and decided to try and import my old content, which was neither very large nor very good, but parts of it were neat. This is where the pain begins.

There’re two lessons I’ve learned from this the hard way:

  • Don’t transform your data if you can help it; input is most accessible and flexible when left unaltered.
  • If you must do this, make sure you’re using a standard data format (JSON, XML, YAML, etc). A language-specific serialization or an implementation that slightly deviates from spec will lead to tears.

I can’t get at the data by any normal means. My design choices were questionable, but it wasn’t anything too horrible: I had mod_php running in Apache, connecting to a PostgreSQL database where the posts were stored. In my laziness, I never sanitized the data in an acceptable way; I just ran base64_encode over the text and slammed it in the database. That’s an effective way to prevent injections, sure, but it’s horrible in just about every other way, from storage size to indexing to overall complexity. There’s a much better, even simpler method – parameterization – but I was 17 at the time, living fast and dying hard. This was the first problem: there was just no reason to have done this in the first place.
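For the record, the parameterized approach looks something like this. It’s a rough sketch in Python with psycopg2 (not the PHP setup I actually had), and the table and column names are invented:

# Parameterized insert: the driver handles escaping, so the post text goes
# into the database untouched and comes back out untouched. Schema is made up.
import psycopg2

conn = psycopg2.connect("dbname=blog")
cur = conn.cursor()
cur.execute("INSERT INTO posts (title, body) VALUES (%s, %s)",
            ("Hello", "Raw post text, no base64 required"))
conn.commit()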

So I import the database dump, extract the data from the table, and all is going well. There’s a little console wizardry involving cut to get the right fields, then just pipe it through base64 -d.

base64: invalid input

Hmm, that’s strange. I try it with -i (--ignore-garbage) and get the same result. Copypasting to some web service yields more gibberish, so I write a little bit of Python and discover that its base64 library can’t grok it either. So now I’m in trouble, but surely if PHP wrote this data, it can read it, right? Right? A couple of minutes later, I’m trying to decode it with the latest PHP, and it runs, but gives me the same gibberish from earlier. There are a couple of strings I can make out, but most of it is garbage, typical of the character boundaries in base64 getting thrown off. Versions 5.3 and 5.2 give me the same behavior. I was confident that PostgreSQL wasn’t breaking anything, because I know the backup was good, Postgres 9.x on a default install is pretty much the same as what the dump came from, and I trust it far more than PHP. I ended up having to go back to the exact PHP version that wrote the data in order to recover it. At the very least, if you have to use some bad encoding scheme (just don’t), make sure you’re not using some special dialect of the standard.

THIS. IS. WEB SCALE.

As far as a homepage goes, I’ve been down for months.

See, the problem is that I’m too cheap. The only hosting company that tickles my fancy at the moment is Linode, but their cheapest offering is twenty bucks, which equals many pounds of delicious trail mix. It’s not really expensive, but I can’t justify that for a little blog which historically gets like two hits a year. That buys 512MB of RAM, 20GB of storage, and 200GB of bandwidth: more than I need for this, but not enough to do anything really fun with.

On the other hand, I like things with my name on them. As a programmer, your innards do a strange dance when you have to tell someone, “yeah, here, let me give you my yahoo address.” An AOL or Lycos address could almost be funny, but Yahoo, MSN, and even GMail advertise an unhealthy lack of self-respect for someone asserting their ability to Make Things Go on the web. So I decided to do something about it, while keeping costs to a minimum.

First of all, you need a domain. You can actually go buy one, or get a free one from DynDNS or a similar company. Keep in mind that you need to be able to edit A and MX records, at a minimum, which every paid domain and most free ones should allow. Again, things like jacob.myhomeserver.com embarrass me, so I just bought this domain. It’s the only thing here that’s not free, so I splurged. Again, pick a free one if you don’t care.

Next comes the email. You can do this through Google, but first you have to verify that you own the domain. If you have permissions to edit TXT records on your domain, that is by far the simplest option. Otherwise, come back to this step after your web server is running and use one of the other methods. Then you set your MX record to “ASPMX.L.GOOGLE.COM” and create an email account for your site in Google’s console. You’re not forced to use the GMail interface if you hate it; just find your POP and SMTP settings and use whatever client you like.

Update: As of 6 December 2012, Google no longer offers free email services. Previously created services will still be free, but this doesn’t work for new domains anymore.

Finally, my current setup ends with a web server. This was the part that had me hung up the longest, as I couldn’t find many ways to hook the domain up with free services. Then this weekend, while fiddling around on GitHub, I found their Pages feature. At its core, this is unimpressive: it allows you to host static files at username.github.com. Optionally, you can template the site with Jekyll, which has somewhat limited but useful capabilities. What makes this awesome, though, is the section hidden away at the bottom called “Custom Domains.” This allows you to set your DNS A record on your domain to point to GitHub’s server, which will then serve your username.github.com site when the domain is visited. It’s essentially name-based virtualhosts, but on a very large scale. Your site is stored in a username.github.com repository, where you can put your raw files or Jekyll sources. The only thing you need to do is add a CNAME file containing the name of your domain that’s being forwarded, so GitHub knows where to send requests when somebody asks for yoursite.com.

All of this would be mediocre at best, except that this site is nearly impossible to take down. That $20 Linode server, however flexible it may be, is going to choke if you happen to hit Slashdot or something. While I suspect there must be some sort of rate-limiting in place, GitHub has plenty of horsepower and bandwidth to push out site content. It also knows everything it could need with regard to caching: the content changes when commits are pushed, and it rebuilds the site. We can still do any frilly effects or addons we like if we want to use Javascript, and the lack of server-side processing doesn’t really matter for this kind of site. I’m going to miss being able to serve X-Bender headers, but other than that I’m not missing much.

How not to upgrade a server

It took quite a while for PostgreSQL 9 to be packaged for Arch, so I forgot about it for a while. I noticed there was a 9.0 package available today. Since I’ve been more involved with 21 units of class and 40 hours of work than anything else, I had neglected to upgrade my system for several months.

This all came back to bite me today. I wasn’t quite dumb enough to blindly upgrade Postgres, just almost. I shut down the database and copied the data directory, expecting the pg_upgrade script to do the conversion. I chose the full upgrade with “pacman -Syu” (bad idea) and blindly accepted the list of packages, since I had already prepared the database. It turns out pg_upgrade wants the binaries of both versions as well as the data, so… it didn’t work. Panic mode. I downgraded to 8.4, whipped out a SQL dump, and upgraded again. Then I repopulated the new database without issue. Standard dump and restore, no problem.

So I check permissions, start up Apache, and get… errors. Lots of them, mostly telling me my code has syntax errors. I poke around for a good long while, then it hits me: Arch made Python 3 the default interpreter last month. My code is all written for 2.x. Oh. Looking for the quick fix, I patch up the shebang lines to /usr/bin/python2. Oh look. New errors! Something about the “pg” module. When I started this, I had used the pygresql module for database connectivity. It’s really not a great module; I didn’t know about DB-API at the time. Either when upgrading the database or Python (probably Python), the module was lost. It was an AUR package, so I had to go grab the source again. It failed to build, complaining about my Python version. I poked around, and the build script was, of course, calling whatever Python version happened to be the system default. I messed with the source and the PKGBUILD but couldn’t get makepkg to use Python 2. I finally broke down and installed it manually. This worked fine, and so it’s up again.

Besides the tricky issue with multiple versions, I shouldn’t have upgraded that many things at once. I’m pretty sick of using pygresql, though, so this is all the more encouragement to replace it with something DB-API compliant. Free time is starting to trickle back into life, so I’ll probably do it soon.

EDIT: I’ve since fixed the PKGBUILD on the AUR, although the pygresql FTP server seems to be down. It’s still on pypi, though.