TAG: Web 2.0

Alex Barnett and his Shortening Tail

Alex Barnett writes: How RSS thickened my Long Tail. He wonders whether RSS and other Web 2.0 aggregation technologies can equalize page views over the long term, making the Long Tail a bit shorter.

Writing Semantic Markup

Digital Web Magazine has published Writing Semantic Markup, the latest article from Richard MacManus and me in the Web 2.0 Design column.

I had the writing duties on this one, and it wasn’t easy. I tried to use a relatively innocuous definition of “semantic” and expand on it to show how we might be writing markup going forward. I also had to balance the idea that XHTML has semantic elements but, for better or worse, isn’t really fulfilling that purpose.

Let me know what you think.

The Long Tail and Web 2.0

Ever since his excellent Long Tail article was published in Wired last November, I’ve been following Chris Anderson’s writing over at the Long Tail blog. It’s becoming an invaluable resource for understanding today’s economics. The Long Tail is about focusing on the less popular content that previously couldn’t be accessed because of some physical limitation: […]

Continue Reading: The Long Tail and Web 2.0

Microsoft could take Huge Blow from Open Data

David Weinberger points to a potentially explosive article in the Financial Times. Here’s an excerpt:

The state of Massachusetts has laid out a plan to switch all its workers away from Microsoft’s Word, Excel and other desktop software applications, delivering what would be one of the most significant setbacks to the software company’s battle against open source software in its home market.

The state said on Wednesday that all electronic documents “created and saved” by state employees would have to be based on open formats, with the switch to start at the beginning of 2007.

Documents created using Microsoft’s Office software are produced in formats that are controlled by Microsoft, making them ineligible. In a paper laying out its future technology strategy on Wednesday, the state also specified only two document types that could be used in the future – OpenDocument, which is used in open source applications like Open Office, and PDF, a widely used standard for electronic documents.

The switch to open formats like these was needed to ensure that the state could guarantee that citizens could open and read electronic documents in the future, according to the state – something that was not possible using closed formats.

So at least one state (Massachusetts) is actually moving to open formats for all its data. This is so Web 2.0: open data is king, and public access is necessary, not just useful, as government agencies are required to offer much of their content to everyone.

Bottoms-Up Semantics by Agile XML

On XML.com, Micah Dubinko summarizes some interesting conversations surrounding agile development and XML.

I think there is something to the “agile development” idea. It’s a bottom-up approach, instead of top-down.

The top-down approach, of course, is the Semantic Web – in particular, the technology RDF. RDF was created for knowledge representation, and if you scroll down a bit on the Semantic Web roadmap you’ll come to a graph showing how that works: you can make assertions about things, using subjects, verbs, and objects.
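To make the subject-verb-object idea concrete, here’s a minimal sketch in plain Python rather than a real RDF toolkit – the URIs and property names are made up for illustration, not from any actual vocabulary:

```python
# Each RDF assertion is a triple: (subject, verb/predicate, object).
# These URIs and verbs are illustrative placeholders, not real vocabularies.
triples = [
    ("http://example.com/alex", "wrote", "http://example.com/post/long-tail"),
    ("http://example.com/post/long-tail", "isAbout", "RSS"),
    ("http://example.com/alex", "knows", "http://example.com/richard"),
]

# A query is just a pattern match over the triples:
# find every object where the subject and verb match.
def objects_of(subject, verb):
    return [o for (s, v, o) in triples if s == subject and v == verb]

print(objects_of("http://example.com/alex", "wrote"))
```

The point of the graph model is that any number of independent sources can contribute triples about the same subject, and queries like the one above still work across all of them.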

The problem is that this is over the heads of most people, including myself. I think it is wonderfully interesting, but I couldn’t build a system with it. I wonder if this is the general feeling…for one thing we have no great application showing the value of this. I think we will eventually, but developers often need to see the end-goal before something really catches on.

So, in the meantime we’ll use agile formats that slowly build toward the vision outlined by Tim Berners-Lee and the Semantic Web folks. We’ll use Relax NG over XML Schema, and RSS over XHTML. How many of you browse nowadays without seeing any XHTML documents, other than the snippets embedded in an RSS file?
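Part of what makes RSS “agile” is how little machinery you need to consume it. A sketch with the standard library – the feed content here is a hand-written stand-in, not a real Bokardo feed:

```python
import xml.etree.ElementTree as ET

# A hand-written RSS 2.0 snippet -- the kind of simple, agile format
# most feeds use. The titles and link are placeholders for illustration.
rss = """<rss version="2.0"><channel>
  <title>Bokardo</title>
  <item><title>Writing Semantic Markup</title>
    <link>http://example.com/semantic-markup</link></item>
</channel></rss>"""

channel = ET.fromstring(rss).find("channel")
items = [(i.findtext("title"), i.findtext("link"))
         for i in channel.findall("item")]

for title, link in items:
    print(title, "->", link)
```

A dozen lines and you have structured data out of a feed; no schema validation, no ontology, just a shared convention about element names.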

Jeff Jarvis: Who wants to own content?

Jeff Jarvis writes a passionate post about the hazards of being a content owner in Web 2.0: he says that content is so easily created now (something Richard and I pointed to in Web 2.0 for Designers) that the value actually shifts away from owning it. Instead of being content owners, he says, companies should want to be owners of trust:

“So don’t own the content. Help people make and find and remake and recommend and save the content they want. Don’t own the distribution. Gain the trust of the people to help them use whatever distribution and medium they like to find what they want.”

Web 2.0 is Not About Technology, It’s About Sharing Information

I’ve been having interesting conversations lately about Web 2.0. As I’ve written before, many folks feel like it is a buzzword, and I completely understand that. I hate buzzwords, too. The conversations usually center around the impression that Web 2.0 is technology-based, and that nothing has really changed in technology, so Web 2.0 is nothing […]

Continue Reading: Web 2.0 is Not About Technology, It’s About Sharing Information

Restrictive APIs

After writing a Greasemonkey script accessing Flickr’s photo API, Daniel Kim thinks the API is too restrictive for his remixing needs. He writes:

“Another point to consider is that as these mash-ups get more sophisticated they will no longer be pure mash-ups. Instead of merely exploiting existing relationships between data in different web sites, they will allow for the creation and storage of new relationships amongst data that is globally distributed across the web. These applications will need to have write access to their own databases, built on DBMS’s designed for the web.”

His whole post is Web Databases vs. Web Services/API’s. It’s a long one, full of lots of interesting points.

Kottke on Web as OS

Jason Kottke has some interesting ideas about the Web as OS.

He thinks the setup will include three pieces:

  • Browsers (like we have now)
  • Web apps like Gmail, Flickr, and Yahoo 360 (like we have now)
  • Local web servers that deliver local content in the same way that we get web content (we don’t have these quite yet)

In other words, he’s thinking that we’ll add APIs (programming interfaces) to our local content that will effectively make it equal to web content, so much so that we might not know where our data is coming from.
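A toy version of that third piece can be sketched in a few lines: a local web server that hands out local content over HTTP, so a browser or script literally can’t tell local from remote. The port, file layout, and content here are my own assumptions, not anything Kottke specified:

```python
import http.server
import threading
import urllib.request

class LocalContent(http.server.BaseHTTPRequestHandler):
    # Stand-in for files on disk; a real server would read the filesystem.
    FILES = {"/notes.txt": b"my local notes"}

    def do_GET(self):
        body = self.FILES.get(self.path)
        if body is None:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 lets the OS pick a free port; run the server on a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), LocalContent)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetching local content now looks exactly like fetching web content:
url = f"http://127.0.0.1:{server.server_port}/notes.txt"
body_text = urllib.request.urlopen(url).read().decode()
print(body_text)
server.shutdown()
```

The interesting part is the last few lines: once local content sits behind the same HTTP interface as web content, the consuming code is identical either way, which is exactly the blurring Kottke is describing.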

If you read Bokardo at least a little bit, you’ll recognize the two kinds of interfaces involved: APIs (Application Programming Interfaces) for the content servers (both web and local), and AIIs (Application Interaction Interfaces) for the browser.

Interface Remixers will Pay for Privilege of APIs

Jonathan Boutelle brings up an interesting point after attending the BayCHI Web2.0 panel the other day: the Web 2.0 companies heavily promoting their APIs (Technorati, Flickr, Google) are glad to have developers create interesting new interfaces out of them…unless you want to make money from that interface. This discussion is just the tip of the […]

Continue Reading: Interface Remixers will Pay for Privilege of APIs
