Posts filed under ‘semantic web’

John Battelle revisits the Database of Intentions

An excellent read at http://bit.ly/aqKu3p, both from Battelle himself and from the many comments, on how the Database of Intentions has evolved since the early years of the century, when Google started to come of age.

With the ‘eruption’ of Web 2.0 he now identifies four key fields, or signals, in the database, with a fifth added later:

  • The purchase → “What I buy” (the fifth, added later: http://bit.ly/b3DjHn)
  • The query → “What I want”
  • The social graph → “Who I am” / “Who I know”
  • The update → “What I’m doing” / “What’s happening”
  • The check-in → “Where I am”

Battelle argues for a more catholic definition of search – an extension from web search to include these other signals.

Not so sure, myself. Whilst these and many other signals are vitally important to the evolution of the web, I’m inclined to agree with the commentator who suggests that the social graph, updates and check-ins are refinements of, or attributes of, intention rather than fields in themselves.

Also surprised there is no reference to semantics or the Semantic Web – aggregation, filtering and pull around a user’s intention or need.

Still, the key message for me is in his conclusion:

If you’re not viewing your job to be a curator, clarifier, interpreter, and amplifier of the Database of Intentions, you’re soon going to be out of business. The Database of Intentions is the fuel that drives media platforms, and as I’ve argued elsewhere, every business is now a media business.

8 March 2010 at 23:20

RDFa implementation on UK Civil Service job search

A very useful primer on the job search RDFa implementation, written by Mark Birkbeck, who developed it.

Although I couldn’t replicate any of the Yahoo! examples, for those interested: if you have a Yahoo! account you can download this Structured data display add-on for Yahoo! search.

When Yahoo! finds any structured data it is displayed in the results – see attachment.

Yahoo! result showing structured data

So at least we have proof that a generic web search engine can parse the data.
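The kind of parsing a search engine does here can be sketched in a few lines: pull out RDFa `property` attributes and their text content from the markup. This is a minimal illustration only – the vocabulary prefix and terms below are made up for the example and are not the actual Civil Service schema.

```python
# Minimal sketch: extracting RDFa property/literal pairs from markup.
# Vocabulary and markup are illustrative, not the real job search schema.
# Nested property elements are not handled; this is a flat-structure sketch.
from html.parser import HTMLParser

class RDFaExtractor(HTMLParser):
    """Collects (property, literal) pairs from elements that carry
    an RDFa 'property' attribute."""
    def __init__(self):
        super().__init__()
        self.pairs = []
        self._prop = None
        self._buf = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "property" in attrs:
            self._prop = attrs["property"]
            self._buf = []

    def handle_data(self, data):
        if self._prop is not None:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if self._prop is not None:
            self.pairs.append((self._prop, "".join(self._buf).strip()))
            self._prop = None

html_snippet = """
<div xmlns:job="http://example.org/job-vocab#" typeof="job:Vacancy">
  <span property="job:title">Policy Adviser</span>
  <span property="job:location">London</span>
  <span property="job:salary">32000</span>
</div>
"""

parser = RDFaExtractor()
parser.feed(html_snippet)
print(parser.pairs)
# [('job:title', 'Policy Adviser'), ('job:location', 'London'), ('job:salary', '32000')]
```

Once a crawler has these property/value pairs it can surface them in enhanced results, which is essentially what the Yahoo! add-on is displaying.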

28 April 2009 at 20:35

The case for strong narratives

A former colleague, Silver Oliver, makes the case for web-scalable narratives. Music to my ears:

“As we build larger and larger websites it becomes increasingly difficult to scale meaningful user journeys. Success is dependent on identifying your key user journeys (narrative structures) and ensuring these can be dynamically populated as the site grows.”

He argues that, in contrast to tags, which “help to open up new user journeys but are weak in narrative, taking the form ‘this content is about this tag’”, there is a need to think about the right primary narrative structures and to encode these user journeys into the very core of the site.

Oliver cites well known examples:

  • Customers Who Bought This Item Also Bought – noun (book) verb (also bought) noun (book)
  • Buy it now – noun (user) verb (buy) noun (item)
  • Such and such wrote on your Wall – noun (friend) verb (wrote on) noun (wall)

and goes on to suggest they can be scalable to the semantic web using ontologies and domain models.

30 November 2008 at 20:59

Semantic Analysis Technology

I attended “Semantic Analysis Technology: in Search of Categories, Concepts & Context”, the fourth ISKO UK KOnnecting KOmmunities event on 3 November 2008 at University College London.

First up were presentations from two vendors, Luca Scagliarini and Jeremy Bentley.

Scagliarini argued that information discovery suffers from both information overload and information underload owing to a lack of meaning-based text processing. Free text search and shallow automatic linguistic analysis do not do the job, but a ‘deep semantic analysis’ – one that ‘understands’ the meaning encoded in the relationships between verbs, prepositions and nouns – demonstrates potential.

Bentley reviewed key information organisation issues – unstructured information, the doubling of the number of resources every 19 months, ‘findability’ problems, and how black box solutions may not do the job. He discussed the relevance of metadata and taxonomies built specifically to reflect the way an organisation works.

Later, practitioners presented – Rob Lee and Helen Lippell, Karen Loasby and Silver Oliver.

Lee talked about Muddy Boots, a BBC project to support the BBC’s remit to link to more external sources. Lee illustrated how structured datasets in the public domain could be used to contextualise and index BBC resources, exploiting their semantic richness to find meaningful external links.

Lippell, Loasby and Oliver discussed three different implementations of auto-categorisation systems, demonstrating advantages and issues with each approach. The approaches were:

  • using Verity Intelligent Classifier (VIC) and a taxonomy with a set of rules that could be finely tuned
  • applying a rule-based automatic classification system combined with the author’s review and corrections, so that BBC content could be described in detail
  • a “statistical-based auto-categorisation” project designed to connect and cross-reference distributed BBC content and resources horizontally
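The first, rule-based approach can be sketched very simply: a taxonomy term fires whenever one of its hand-tuned keyword rules matches the text. The terms and rules below are illustrative inventions, not the actual taxonomies discussed at the event.

```python
# Sketch of a rule-based auto-categoriser: each taxonomy term carries
# a set of finely tunable keyword rules. Terms and rules illustrative.
RULES = {
    "Politics": ["parliament", "election", "minister"],
    "Sport":    ["football", "olympics", "match"],
    "Science":  ["research", "genome", "telescope"],
}

def categorise(text):
    """Return the taxonomy terms whose rules match the text."""
    text = text.lower()
    return sorted(term for term, keywords in RULES.items()
                  if any(k in text for k in keywords))

print(categorise("Minister questioned in Parliament over Olympics funding"))
# ['Politics', 'Sport']
```

The appeal of this style is the fine tuning the speakers mentioned: an editor can adjust an individual rule without retraining anything, which is precisely the trade-off against the statistical approach in the third bullet.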

8 November 2008 at 23:23
