The Gumtree Model

Gumtree, first established in March 2000, came into existence at the height of the dot-com boom. A website originally created to host classified ads for Londoners quickly expanded worldwide. Close to home, Gumtree is the largest free classifieds website in Australia, with over 3 million unique visitors every month and approximately 1 million ads live at any one time.

Gumtree provides an excellent example of O’Reilly’s final core pattern of Web 2.0, Lightweight Models & Cost-Effective Scalability. So let’s apply the best practices to Gumtree’s business model.

‘Scale with demand’ & ‘Syndicate business models’: Gumtree began solely in London; however, as the name and brand grew, so did the business, opening new sister websites for each country it expanded into. This was a gradual process: as demand grew, so did the business. To keep it manageable, the business was split into logical geographical divisions.

‘Outsource whenever practical and possible’ & ‘Provide outsourced infrastructure, function, and expertise’: In a two-for-one deal, Gumtree has outsourced content creation to the community, while also providing an infrastructure which others can use to advertise. This is an excellent, yet very simple, example of outsourcing.

Scale your pricing and revenue models: Both buying and selling on Gumtree is free for all users; however, advertisers can promote their ads for a fee. This means advertisers can use whichever paid features they can afford, or deem necessary, to sell their item or service, or to draw attention to their ad.


Market virally: Gumtree makes it easy for users to share their findings by incorporating Facebook, Twitter, Google+, Pinterest and email sharing functionality into the website design. This helps drive traffic to the website through word-of-mouth.
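Sharing buttons like these typically just open a social network’s public share-intent URL with the page address encoded in the query string. Here is a rough sketch of the idea; the Gumtree ad URL and caption below are invented, while the endpoints are the networks’ public share intents:

```python
from urllib.parse import urlencode

def share_links(url: str, text: str) -> dict:
    """Build share-intent links like the buttons a classifieds site embeds on each ad."""
    return {
        "facebook":  "https://www.facebook.com/sharer/sharer.php?" + urlencode({"u": url}),
        "twitter":   "https://twitter.com/intent/tweet?" + urlencode({"url": url, "text": text}),
        "pinterest": "https://pinterest.com/pin/create/button/?" + urlencode({"url": url, "description": text}),
        "email":     "mailto:?" + urlencode({"subject": text, "body": url}),
    }

# hypothetical ad URL and caption
links = share_links("https://www.gumtree.com.au/ad/12345", "Bargain bookshelf!")
```

Each link costs the site nothing to serve, yet every click puts the ad in front of the sharer’s whole social graph — word-of-mouth at web scale.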

Taking all of this into account, it is clear that Gumtree has successfully and efficiently taken on the principles of O’Reilly’s core pattern of Web 2.0, Lightweight Models & Cost-Effective Scalability.


Signing off for the final time in my duties as student of INB347, I thank you for reading and please feel free to leave feedback!


– Matt



Leveraging the Long Tail of online music

SoundCloud is a music distribution platform which allows collaboration, promotion and distribution of music by its users. Anyone can upload their own audio, whether it be original music, remixes or covers of existing songs, or anything else you can think of.

Build on the driving forces of the Long Tail: SoundCloud has made it possible for anyone with a free account to upload music, making as much music as possible available. At the uploader’s discretion, SoundCloud also allows free downloading of music, either limited to a certain number of downloads or unlimited, decreasing the cost of consumption. Finally, this online sharing platform means that anyone, anywhere, with an internet connection can either share their own music with the world or discover unknown artists.

Use algorithmic data management to match supply and demand: At present, SoundCloud offers a ‘creators to follow’ feature based upon your ‘plays’; however, there are no ‘similar tracks’ or ‘similar artists’ features.
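SoundCloud’s actual recommendation algorithm isn’t public, but a plays-based ‘creators to follow’ feature can be sketched as simple collaborative filtering: score creators you haven’t played yet by how much their listeners’ tastes overlap with yours. All usernames and data below are invented:

```python
from collections import Counter

# user -> set of creators that user has played (hypothetical data)
plays = {
    "alice": {"synthwave_sam", "lofi_lou", "dj_echo"},
    "bob":   {"synthwave_sam", "dj_echo", "beat_bea"},
    "cara":  {"lofi_lou", "beat_bea"},
}

def creators_to_follow(user: str, plays: dict) -> list:
    """Suggest unplayed creators, weighted by taste overlap with other listeners."""
    mine = plays[user]
    scores = Counter()
    for other, theirs in plays.items():
        if other == user:
            continue
        overlap = len(mine & theirs)  # shared listening = similarity
        for creator in theirs - mine:  # only suggest creators new to this user
            scores[creator] += overlap
    return [creator for creator, _ in scores.most_common()]
```

Running `creators_to_follow("alice", plays)` suggests `beat_bea`, because Alice’s two most similar listeners both play her.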

‘Use an architecture of participation to match supply and demand’ & ‘Leverage customer self-service to cost effectively reach the entire web’: SoundCloud’s accounts follow a self-service model, meaning users create, upload, use and download content at their own discretion. This desire to share or discover content over the web is what drives the supply of, and demand for, the service.

SoundCloud is an excellent audio sharing service which provides a platform for undiscovered or little-known artists to reach their individual niche markets, and even potentially grow those markets.

Thanks for reading and please feel free to leave feedback!

– Matt

Team Perpetual Beta

Many online games now take the form of a perpetual beta. Rather than simply purchasing a game, completing it and then purchasing another, gamers can now break the purchase-and-complete cycle and keep playing the same, forever-changing game. Team Fortress 2 takes on this Perpetual Beta model.

With regular updates to the game, including new weapons, items, maps and game types, players rarely grow bored of playing the same game. The Perpetual Beta model also benefits the game in the marketplace, with the game appearing as more of an investment than a one-off purchase.

So let’s see how Team Fortress 2 has applied the best practices of O’Reilly’s core pattern of Web 2.0, the Perpetual Beta.

Release early and release often: The developers have built Team Fortress 2 from the ground up. The game has grown from a basic multiplayer game into an advanced, feature-packed and interactive one, all thanks to its regular update releases.


Engage users as co-developers and real-time testers: Players are able to create their own servers, maps and game modes. They can also submit designs for their own weapons and items, and the community can then vote on which items become a reality in-game.

Instrument your product: Much as described above, gamers are able to modify game types on their servers, meaning the developers can monitor and acknowledge what gamers want out of their game-play experience.
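As a hypothetical sketch of what this instrumentation might look like server-side (the mode names are invented, and Valve’s real telemetry is certainly far richer), the core is just counting what players actually choose:

```python
from collections import Counter

class ModeTelemetry:
    """Minimal instrumentation: tally which game modes players actually pick,
    so developers can see demand rather than guess at it."""

    def __init__(self):
        self.counts = Counter()

    def record(self, mode: str) -> None:
        """Log one match started in the given mode."""
        self.counts[mode] += 1

    def top_modes(self, n: int = 3) -> list:
        """Most-played modes first, as (mode, count) pairs."""
        return self.counts.most_common(n)

# simulated events from community servers
stats = ModeTelemetry()
for mode in ["payload", "payload", "capture_the_flag", "payload"]:
    stats.record(mode)
```

If ‘payload’ dominates the tallies week after week, that is a strong signal about where to spend the next update.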

Incrementally create new products: As already mentioned, Team Fortress 2 regularly has new weapons, items and maps added to the game, maintaining the interest of gamers over the long-term.

Make operations a core competency: The behind-the-scenes operations of Team Fortress 2 are also solid. The game runs very smoothly and with few errors, considering the widespread, live nature of the collaborative game-play.

I believe that Team Fortress 2 provides an excellent example of the Perpetual Beta concept.

Thanks for reading and please feel free to leave feedback!

– Matt



The shopping list, the to-do list, the quick mental note, verbal instruction or visual reminder. Whether they were written, drawn or printed, these were all once kept as hard copies, but no longer. Evernote provides all of these functions, and more, synchronised seamlessly onto any device. Whether the note is text, voice, video or imagery, all content can be shared between devices with ease.

Tim O’Reilly’s core pattern for Web 2.0, ‘Software above the level of a single device’, refers to this type of seamless integration between devices, meaning the same content can be accessed and interacted with across a variety of devices. This is all possible through use of the cloud, a concept that would have seemed futuristic only a few years ago, but which now forms part of users’ expectations.

So let’s get right into it and see how Evernote has met the 7 Best Practices of ‘Software above the level of a single device’.

Design from the start to share data across devices, servers, and networks: Evernote was originally designed as a web service, immediately making it possible for users to access the note-taking software from any internet-capable device. This smart move gave the company access to the widest possible market at the lowest possible start-up cost. Over the next few years, Evernote clients became available for a wide variety of operating systems across PCs, mobiles and tablets, along with plug-ins for web browsers. It is clear that Evernote was designed, from the beginning, to be used across a wide variety of devices, servers and networks.

Think location aware: Where GPS is available, Evernote will by default title any new note according to the user’s location when the note is made. While the user can overwrite the title, the location remains stored in the note details. Although this is a simple feature, it can help jog users’ memories when they haven’t left themselves a clear enough note, or when they look at a note long after writing it and no longer recollect their train of thought at the time.
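A minimal sketch of the idea, assuming a tiny hard-coded gazetteer in place of the real reverse-geocoding service a client like Evernote would call (place names and coordinates are illustrative only):

```python
import math

# hypothetical mini-gazetteer; a real client would reverse-geocode via an API
PLACES = {
    "Brisbane CBD":     (-27.4698, 153.0251),
    "South Bank":       (-27.4748, 153.0177),
    "Fortitude Valley": (-27.4570, 153.0340),
}

def default_title(lat: float, lon: float) -> str:
    """Title a new note after the nearest known place, mimicking a
    location-aware default title."""
    nearest = min(PLACES, key=lambda place: math.dist((lat, lon), PLACES[place]))
    return f"Note from {nearest}"
```

A note jotted down near the city centre gets `default_title(-27.4699, 153.0250)`, i.e. “Note from Brisbane CBD” — a small breadcrumb that later reminds you where the thought occurred.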

Extend Web 2.0 to devices: It is clear that Evernote allows users to use many functionalities of Web 2.0 on their mobile devices. Users are able to upload and edit their notes from any device.

Use the power of the network to make the edge smarter: Beyond data storage in the cloud, Evernote has not particularly exhibited this best practice; however, there is no real need for it given the software’s purpose.

Leverage devices as data and rich media sources: This is the exact purpose of Evernote: to use devices to create content anywhere, at any time. Mobile data creation allows users to seamlessly access their data across their devices, without the need to transfer it manually between them.

Make one-click peer-production a priority: This best practice refers to the ease-of-use of the software and how seamlessly it uploads, shares or downloads data. Evernote automatically uploads all notes to the cloud, and automatically downloads all existing notes when the application is opened: an example of zero-click data sharing.
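The sync behaviour described above can be sketched as a last-writer-wins merge of local and remote note stores. This is only the core idea under simplified assumptions; Evernote’s real protocol handles conflicts and partial updates far more carefully:

```python
def sync(local: dict, remote: dict) -> dict:
    """Merge two note stores, keeping the newer copy of each note
    (last-writer-wins by an 'updated' timestamp)."""
    merged = {}
    for note_id in local.keys() | remote.keys():
        ours, theirs = local.get(note_id), remote.get(note_id)
        if ours is None:
            merged[note_id] = theirs          # exists only remotely
        elif theirs is None:
            merged[note_id] = ours            # exists only locally
        else:
            merged[note_id] = ours if ours["updated"] >= theirs["updated"] else theirs
    return merged

# a phone edited n1 after the last sync; the server holds an extra note n2
local  = {"n1": {"updated": 2, "text": "buy milk and bread"}}
remote = {"n1": {"updated": 1, "text": "buy milk"},
          "n2": {"updated": 1, "text": "call the plumber"}}
merged = sync(local, remote)
```

Running this merge on app launch and after each edit is what makes sync feel like zero clicks to the user.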

Enable data location independence: As already discussed, data is available on any device, anywhere, at any time, completing the best practices of ‘Software above the level of a single device’.

Evernote truly is a great piece of note-taking software, with great capability. The seamless integration between devices, coupled with the simple yet feature-packed interface, is a sure-fire winner.


Thanks for reading and please feel free to leave feedback!


– Matt

Presenting a Rich User Experience

No longer is Microsoft PowerPoint the easiest way to create an aesthetically pleasing presentation; make way for Prezi. Since 2008, the ‘zooming presentation tool’ has been alive and kicking, and has changed the concept of presentations for the better. For the better part of 50 years, not much happened to presentations beyond the ‘slide’, whereas now you can click and zoom on different areas of your presentation to see how they relate.

While Prezi is cloud-based by default, it is also possible to download Prezi Desktop, a desktop application, with a premium account. Much like Microsoft PowerPoint, this means your presentations can live on your desktop rather than requiring access to the cloud to edit them.


Prezi has been such a great success due to its exemplary application of the best practices of O’Reilly’s fourth core pattern of Web 2.0, creating a Rich User Experience.

‘Combine the best of desktop & online experiences’ & ‘Usability and simplicity’: Prezi has hit the nail on the head with this one. By offering users excellent usability from a web-based editor, it has seen widespread and rapid growth of its user base. It’s simple to use, the end result looks excellent and there’s plenty of room for user customisation.

Search over structure: While Prezi hasn’t exactly employed this practice, there is little need for it; the structure itself is quite impressive. It’s simple, easy to use, and there is no need to search for anything. Within the website itself, however, search functions are available.

Preserve content addressability: Prezi makes it easy; you can either download your presentations or share them directly from the application itself.

Prezi really is a great and innovative tool. Try it, but be warned: you may not want to go back to Microsoft PowerPoint once you do.

Thanks for reading and please feel free to leave feedback!

– Matt

Innovation in Historic Assembly


Ever wondered what your neighbourhood, block, street, or even house used to look like? Well, thanks to the readily available Google Maps and Street View APIs, now you can.

SepiaTown is an historic photograph sharing platform which allows users to share images from anywhere around the globe, pinpointing the location each image was taken from and allowing us to look at the same locations decades, even a century, apart. While this is an excellent example of O’Reilly’s third core pattern of Web 2.0, Innovation in Assembly, once again the strong link to Harnessing Collective Intelligence is also present.

But what exactly is Innovation in Assembly? This core pattern refers to the idea that Web 2.0 applications are more than just applications in themselves; they can also be built upon and repurposed to create new, innovative ideas through simple modification of a pre-existing one.

Open API Timeline (image courtesy of ProgrammableWeb)

This is most commonly done through the availability of APIs for Web 2.0 applications. As shown in the above image, the number of available APIs has increased dramatically over the past decade, evidencing the increasing level of innovation in assembly that is occurring.

SepiaTown exhibits this concept perfectly. By building upon both the Google Maps and Google Street View APIs, SepiaTown has created a new use for the Google Maps interface while maintaining functional similarities with Google’s own products, making the application easy to adopt.

Sydney, Australia

As shown in the above screen capture, posted photos can be clicked to display the image, and whatever information has been attached to it, on the left-hand side of the screen. With a simple click of the ‘then/now’ button, Street View of the same location is displayed on the right, comparing the then and now of the location.

This application applies the following best practices of Innovation in Assembly. These are just a few examples of how they have been put to work.

Offer APIs to your service & design for remixability: In response to a number of applications developed to hack into the Google Maps application, Google released an official API for it in 2005, just months after the application itself launched.

Build your business model into your API: Google has arguably been quite generous in allowing thousands of applications to build upon its service for free. However, after six years of free service, and with thousands of applications dependent on Google Maps, Google began to charge for its use, although only high-traffic websites would be affected. The first 25,000 map loads per day were free, with US$4 charged for every 1,000 loads after that. Overall, this did not impose a huge cost on the applications using the APIs, but it allowed Google to capitalise on its success.
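Under the pricing described above (free quota, then a flat rate per thousand), the daily bill is simple arithmetic:

```python
def daily_maps_cost(map_loads: int, free_quota: int = 25_000,
                    rate_per_1000: float = 4.0) -> float:
    """Daily cost under a freemium API tariff: the first `free_quota` loads
    are free, then `rate_per_1000` dollars per 1,000 loads beyond that."""
    billable = max(0, map_loads - free_quota)
    return billable / 1000 * rate_per_1000

# a hobby site stays free; a busy site pays a modest amount
hobby_cost = daily_maps_cost(10_000)   # under the free quota
busy_cost = daily_maps_cost(30_000)    # 5,000 billable loads
```

So a site serving 30,000 map loads a day would owe about $20 for that day — negligible for a business, but multiplied across thousands of high-traffic integrations it becomes real revenue for the platform.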

Use Web 2.0 to support your platform: Google is arguably one of the best at applying Web 2.0 principles. Its success was largely derived from its simple, easy-to-use systems.


So take a trip through time and explore your neighbourhood on SepiaTown!


Thanks for reading and please feel free to leave feedback!

– Matt

Captcha-ing More Than You Know

CAPTCHAs: we all know them, we’ve all used them. They’re for security, right? To tell us apart from the robots? You’d be right to think they are a security feature used on hundreds of thousands of websites, or at least that this was their initial purpose.


Standing for Completely Automated Public Turing test to tell Computers and Humans Apart, CAPTCHAs came to life in 2000 as the brainchild of Carnegie Mellon University’s Luis von Ahn. Taking the system a step further, the project was renamed reCAPTCHA when it was redesigned to add further layers of distortion on top of the text to beat the hackers. But that wasn’t all; the next development was the true genius behind the system. The proof? Google bought it.


Luis realised that more than 100 million CAPTCHAs were being completed every day, providing the opportunity to use words tagged as unreadable during the digitisation of books and other printed materials as CAPTCHA phrases. In doing this, Luis demonstrated an excellent combination of O’Reilly’s first and second core patterns of Web 2.0: Harnessing Collective Intelligence, as discussed last week, and the idea that ‘Data is the next Intel Inside’.
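The mechanism can be sketched as follows. This is a deliberate simplification: the user is shown two words, one known (the actual human test) and one an OCR-unreadable scan, and a correct answer on the known word turns the other answer into a vote toward digitising the scan. The real system also distorts the words and weighs agreement across many users and OCR engines:

```python
from collections import Counter

def check_recaptcha(answer_control: str, answer_unknown: str,
                    known_word: str, guesses: dict, unknown_id: str) -> bool:
    """One round of the reCAPTCHA idea: pass the known word and your reading
    of the unknown word is recorded as a transcription vote."""
    if answer_control.strip().lower() != known_word.lower():
        return False  # failed the human test; discard the other answer
    guesses.setdefault(unknown_id, []).append(answer_unknown.strip().lower())
    return True

def agreed_transcription(guesses: dict, unknown_id: str, quorum: int = 3):
    """Accept a transcription once enough independent humans agree on it."""
    word, votes = Counter(guesses[unknown_id]).most_common(1)[0]
    return word if votes >= quorum else None
```

Each individual solver is just proving they are human; collectively, they are proofreading books one word at a time.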

This concept refers to the increased importance of, and reliance on, data in Web 2.0 applications. The reCAPTCHA system exhibits this by fulfilling 4 of the 5 best practices of the core pattern.

Seek to own a unique, hard to recreate source of data: Luis von Ahn did just this in creating reCAPTCHA, as evidenced through Google’s decision to purchase it from him. Seeing the value and complexity of the system, Google capitalised on the opportunities the acquisition created.

Enhance the core data: Under the reCAPTCHA project, the data was enhanced and repurposed to provide the dual benefit of security along with the deciphering of printed texts.

Let users control their own data: Prior to the acquisition by Google, Luis had created the opportunity for users to submit ‘unreadable’ words to the system for deciphering; however, it is unclear whether Google keeps this privilege exclusively for itself.

Define a data strategy: There is a very clear data strategy behind reCAPTCHA: providing a security system to ensure only humans are accessing or posting data, while dual-purposing those answers to decipher texts tagged as unreadable by current text-recognition software.

Google really is CAPTCHA-ing more than we thought.


Thanks for reading and please feel free to leave feedback!


– Matt