Wednesday, June 29, 2016

The Future of the Web

Recently the web—via Twitter—erupted in short-form statements that soon made it clear that buttons had been pushed, sides taken, and feelings felt. How many feels? All the feels. Some rash words may have been said.

But that’s Twitter for you.

It began somewhat innocuously off-Twitter, with a very reasonable X-Men-themed post by Brian Kardell (one of the authors of the Extensible Web Manifesto). Brian suggests that the way forward is by opening up (via JavaScript) some low-level features that have traditionally been welded shut in the browser. This gives web developers and designers—authors, in the parlance of web standards—the ability to prototype future native browser features (for example, by creating custom elements).

If you’ve been following all the talk about web components and the shadow DOM of late, this will sound familiar. The idea is to make standards-making a more rapid, iterative, bottom-up process; if authors have the tools to prototype their own solutions or features (poly- and prolly-fills), then the best of these solutions will ultimately rise to the top and make their way into the native browser environments.
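
To make this concrete, here’s a minimal sketch of the kind of prototyping the Extensible Web folks have in mind: a custom element rendered into its own shadow DOM. The element name and its behavior are hypothetical, and the sketch assumes a browser (or polyfill) that supports the Custom Elements and Shadow DOM APIs.

```
// A hypothetical custom element, used in markup as:
// <fancy-greeting name="reader"></fancy-greeting>
class FancyGreeting extends HTMLElement {
  constructor() {
    super();
    // Render into a shadow root so the element's internals stay encapsulated.
    this.attachShadow({ mode: 'open' });
  }

  connectedCallback() {
    // Read attributes once the element is attached to the document.
    this.shadowRoot.textContent =
      'Hello, ' + (this.getAttribute('name') || 'web') + '!';
  }
}

// Register the element so the browser treats <fancy-greeting> as a real tag.
customElements.define('fancy-greeting', FancyGreeting);
```

If an element like this proves broadly useful, the thinking goes, it becomes a candidate for native standardization; if it doesn’t, it fades away without ever having been baked into the platform.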

This sounds empowering, collaborative—very much in the spirit of the web.

And, in fact, everything seemed well on the World Wide Web until this string of tweets by Alex Russell, and then this other string of tweets. At which point everyone on the web sort of went bananas.

Doomsday scenarios were proclaimed; shadowy plots implied; curt, sweeping ideological statements made. In short, it was the kind of shit-show you might expect from a touchy, nuanced subject being introduced on Twitter.

But why is it even touchy? Doesn’t it just sound kind of great?

Oh wait JavaScript

Whenever you talk about JavaScript as anything other than an optional interaction layer, folks seem to gather into two big groups.

On the Extensible Web side, we can see the people who think JavaScript is the way forward for the web. And there’s some historical precedent for that. When Brendan Eich created JavaScript, he was aware that he was putting it all together in a hurry, and that he would get things wrong. He wanted JavaScript to be the escape hatch by which others could improve his work (and fix what he got wrong). Taken one step further, JavaScript gives us the ability to extend the web beyond where it currently is. And that, really, is what the Extensible Web Manifesto folks are looking to do.

The web needs to compete with native apps, they assert. And until we get what we need natively in the browser, we can fake it with JavaScript. Much of this approach is encapsulated in the idea of progressive web apps (offline access, tab access, file system access, a spot on the home screen)—giving the web, as Alex Russell puts it, a fair fight.
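
Much of that “faking it” comes down to a handful of JavaScript APIs. As a rough sketch of the offline-access piece (assuming a service worker script at a hypothetical /sw.js, and a browser that supports the API), registration looks something like this:

```
// Feature-detect first: browsers without service worker support simply
// keep working the way they always have.
if ('serviceWorker' in navigator) {
  // The '/sw.js' path is a placeholder; the real script would hold the
  // caching logic that makes the site usable offline.
  navigator.serviceWorker.register('/sw.js')
    .then(function () {
      console.log('Offline support enabled.');
    })
    .catch(function (error) {
      console.log('Service worker registration failed:', error);
    });
}
```

Paired with a web app manifest, that is roughly what earns a site offline access and a spot on the home screen.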

On the other side of things, in the progressive enhancement camp, we have folks who worry that these approaches will leave some users in the dust. This is epitomized by the “what about users with no JavaScript?” argument. This polarizing question (though far from the entire issue) gets at the heart of the disagreement.

For the Extensible Web folks, it feels like we’re holding the whole web back for a tiny minority of users. For the Progressive Enhancement folks, it’s akin to throwing out accessibility—cruelly denying access to a subset of (quite possibly disadvantaged) users.
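
A small sketch makes the progressive enhancement position concrete (the markup and selectors here are hypothetical): the page ships a plain link that works with no JavaScript at all, and the script, if and when it runs, upgrades the experience in place.

```
// Without JavaScript, the link below simply navigates to a comments page.
// With JavaScript (and fetch support), we intercept the click and load
// the comments inline instead. Selectors are placeholders.
var link = document.querySelector('a.show-comments');
var target = document.querySelector('#comments');

if (link && target && 'fetch' in window) {
  link.addEventListener('click', function (event) {
    event.preventDefault();
    fetch(link.href)
      .then(function (response) { return response.text(); })
      .then(function (html) { target.innerHTML = html; });
  });
}
```

Nobody is denied the content; users without JavaScript just get the slower, full-page version of the same thing.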


During all this hubbub, Jeremy Keith, one of the most prominent torchbearers for progressive enhancement, reminded us that nothing is absolute. He suggests that—as always—the answer is “it depends.” Now this should be pretty obvious to anyone who’s spent a few minutes in the real world doing just about anything. And yet, at the drop of a tweet, we all seem to forget it.

So if we can all take a breath and rein in our feelings for a second, how might we better frame this whole concept of moving the web forward? Because from where I’m sitting, we’re all actually on the same side.

History and repetition

To better understand the bigger picture about the future of the web, it’s useful (as usual) to look back at its past. Since the very beginning of the web, there have been disagreements about how best to proceed. Marc Andreessen and Tim Berners-Lee famously disagreed about the IMG tag. Tim didn’t get his way, Marc implemented IMG in Mosaic as he saw fit, and we all know how things spun out from there. It wasn’t perfect, but a choice had to be made, and history suggests that IMG did its job fairly well.

A pattern of hacking our way to the better solution becomes evident when you follow the trajectory of the web’s development.

In the 1990s, webmasters and designers wanted layout like they were used to in print. They wanted columns, dammit. David Siegel formalized the whole tables-and-spacer-GIFs approach in his wildly popular book Creating Killer Web Sites. And thus, the web was flooded with both design innovation and loads of un-semantic markup. Which we now know is bad. But those were the tools that were available, and they allowed us to express our needs at the time. Life, as they say…finds a way.

And when CSS layout came along, guess what it used as a model for the kinds of layout techniques we needed? That’s right: tables.

While we’re at it, how about Flash? As with tables, I’m imagining resounding “boos” from the audience. “Boo, Flash!” But if Flash was so terrible, why did we end up with a web full of Flash sites? I’ll tell you why: video, audio, animation, and cross-browser consistency.

In 1999? Damn straight I want a Flash site. Once authors got their hands on a tool that let them do all those incredible things, they brought the world of web design into a new era of innovation and experimentation.

But again with the lack of semantics, linkability, and interoperability. And while we were at it, with the tossing out of an open, copyright-free platform. Whoops.

It wasn’t long, though, before the native web had to sit up and take notice. Largely because of what authors expressed through Flash, we ended up with things like HTML5, Ajax, SVGs, and CSS3 animations. We knew the outcomes we wanted, and the web just needed to evolve to give us a better solution than Flash.

In short: to get where we need to go, we have to do it wrong first.

Making it up as we go along

We authors express our needs with the tools available to help model what we really need at that moment. Best practices and healthy debate are a part of that. But please, don’t let the sort of emotions we attach to politics and religion stop you from moving forward, however messily. Talk about it? Yes. But at a certain point we all need to shut our traps and go build some stuff. Build it the way you think it should be built. And if it’s good—really good—everyone will see your point.

If I said to you, “I want you to become a really great developer—but you’re not allowed to be a bad developer first,” you’d say I was crazy. So why would we say the same thing about building the web?

We need to try building things. Probably, at first, bad things. But the lessons learned while building those “bad” projects point the way to the better version that comes next. Together we can shuffle toward a better way, taking steps forward, back, and sometimes sideways. But history tells us that we do get there.

The web is a mess. It is, like its creators, imperfect. It’s the most human of mediums. And that messiness, that fluidly shifting imperfection, is why it’s survived this long. It makes it adaptable to our quickly shifting times.

As we try to extend the web, we may move backward at the same time. And that’s OK. That imperfect sort of progress is how the web ever got anywhere at all. And it’s how it will get where we’re headed next.

Context is everything

One thing that needs to be considered when we’re experimenting (and building things that will likely be kind of bad) is who the audience is for that thing. Will everyone be able to use it? Not if it’s, say, a tool confined to a corporate intranet. Do we then need to worry about sub-3G network users? No, probably not. What about if we’re building on the open web but we’re building a product that is expressly for transferring or manipulating HD video files? Do we need to worry about slow networks then? The file sizes inherent in the product pretty much exclude slow networks already, so maybe that condition can go out the window there, too.

Context, as usual, is everything. There needs to be a realistic assessment of the risk of exclusion weighed against the potential gains of trying new technologies and approaches. We’re already doing this, anyway. Show me a perfectly progressively enhanced, perfectly accessible, perfectly performant project and I’ll show you a company that never ships. We do our best within the constraints we have. We weigh potential risks and benefits. And then we build stuff and assess how well it went; we learn and improve.

When a new approach we’re trying might have aspects that are harmful to some users, it’s good to raise a red flag. So when we see issues with one another’s approaches, let’s talk about how we can fix those problems without throwing out the progress that’s been made. Let’s see how we can bring greater experiences to the web without leaving users in the dust.

If we can continue to work together and consciously balance these dual impulses—pushing the boundaries of the web while keeping it open and accessible to everyone—we’ll know we’re on the right track, even if it’s sometimes a circuitous or befuddling one. Even if sometimes it’s kind of bad. Because that’s the only way I know to get to good.



from A List Apart http://ift.tt/28PdgSc
