Archive for the ‘Search’ category
In-Q-Tel, the investment arm of the Central Intelligence Agency, and Google Ventures are investing in Recorded Future, a company whose technology monitors the Web in real time and derives predictions of future events from the content, according to reports.
Recorded Future scours tens of thousands of Web sites, blogs and Twitter accounts to find the relationships between people, organizations, actions and incidents — both present and still to come. The company’s “temporal analytics engine” goes beyond search by “looking at the ‘invisible links’ between documents that talk about the same, or related, entities and events.”
When the data reveals a possible future event, Recorded Future can — so it claims — trace the online “momentum” for the event and predict where and when it might actually happen.
“More than 113 billion searches were conducted in July 2009, representing a 41-percent increase versus a year ago. Google Sites attracted significantly more searches than any other engine with 76.7 billion searches conducted, or 67.5 percent market share. […]
Among the five global regions, Europe accounted for the highest share of searches at 32.1 percent, followed by Asia Pacific (30.8 percent) and North America (22.1 percent).”
“In just over three months, Internet Explorer has seen its overall market share erode by 11.4 percent. Where did that go? It went to Firefox, Safari, and Chrome. Nearly 5 percent of that, or about half, went to Firefox 3.0, which currently has 27.6 percent market share.”
Stephen Wolfram has just posted about the effort, which has taken years of work in stealth and involves more than a hundred employees. He explains the basics of how his “computational knowledge engine” works: you ask it factual questions (such as “How many protons are in a hydrogen atom?”), and it computes the answers for you.
Many details about the engine, scheduled to launch in May, have yet to be released. However, Wolfram has shown it to search engine expert Nova Spivack. In a long post, Spivack calls the effort “almost absurdly ambitious” but concludes that it works, and claims that the engine has the potential to touch our lives as deeply as Google.
The engine doesn’t return documents that might contain the answer, like Google does, and it isn’t a giant database, like Wikipedia. Nor does it resort to natural language to return documents, like Powerset does. Rather, Wolfram has created a proprietary system based on fields of knowledge, containing terabytes of curated data and millions of lines of algorithms to represent real-world knowledge as we know it.
You type questions into a bar that looks very much like Google’s search bar, and the engine uses natural-language processing to understand your question, or even abbreviated notation. It then provides detailed answers.
Performing two Google searches from a desktop computer can generate about the same amount of carbon dioxide as boiling a kettle for a cup of tea, according to new research.
A typical search generates about 7g of CO2; boiling a kettle generates about 15g. “Google operates huge data centres around the world that consume a great deal of power,” said Alex Wissner-Gross, a Harvard University physicist. Wissner-Gross has submitted his research for publication by the US Institute of Electrical and Electronics Engineers and has also set up a website, www.CO2stats.com.
With more than 200m internet searches estimated globally every day, the electricity consumption and greenhouse gas emissions caused by computers and the internet are provoking concern. A recent report by Gartner, the industry analyst firm, said the global IT industry generated as much greenhouse gas as the world’s airlines – about 2% of global CO2 emissions.
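The figures above can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below uses only the numbers quoted in the article (7g per search, 15g per kettle boil, roughly 200m searches a day); the calculation itself is illustrative, not part of the published research.

```python
# Figures quoted in the article (grams of CO2); the arithmetic is illustrative only.
CO2_PER_SEARCH_G = 7             # Wissner-Gross's estimate per Google search
CO2_PER_KETTLE_G = 15            # boiling a kettle for a cup of tea
SEARCHES_PER_DAY = 200_000_000   # rough global daily search count from the article

# Two searches come to 14g, roughly the 15g of one kettle boil.
two_searches_g = 2 * CO2_PER_SEARCH_G

# Implied daily emissions from searches alone, converted from grams to metric tonnes.
daily_tonnes = SEARCHES_PER_DAY * CO2_PER_SEARCH_G / 1_000_000

print(two_searches_g)  # 14 (g) -- about one kettle's 15g
print(daily_tonnes)    # 1400.0 (tonnes of CO2 per day)
```

At these figures, global search alone would account for about 1,400 tonnes of CO2 a day, which gives a sense of why the Gartner comparison with the airline industry drew attention.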
Google has been moved to respond on its official blog, saying that this estimate is *many* times too high.