As part of our continued effort to collaborate with teachers and help students get a better sense of places across the globe, we also announced that Google Earth Pro is now available to educators for free through the Google Earth for Educators site. Educators from higher education and academic institutions who demonstrate a need for the Pro features in their classrooms can now apply for single licenses for themselves or site licenses for their computer labs. A similar program exists for SketchUp Pro: the Google SketchUp Pro Statewide License Grant is currently providing grants to 11 states, and SketchUp Pro is available at no cost to K-12 institutions in all other states.
In conjunction with these exciting Geo-related events and announcements, the Geo Education team also thought it’d be timely and fun to test Googlers’ geographic knowledge by hosting the company’s first-ever Google Geo Bee. With help from National Geographic, 68 teams relived their school years and took a written geography exam, competing for a spot on stage with Alex Trebek, who hosted the main event. The competition was based on the group version of the National Geographic Bee for students, which Google has sponsored for the past two years. Questions included “Which country contains most of the Balkan Mountains, which mark the boundary between the historical regions of Thrace and Moesia?” and “Ben Nevis, the highest peak in the United Kingdom, is located in which mountain chain?”
The winners of our Google Geo Bee: Ian Sharp, Marcus Thorpe and Rob Harford
The final three Google teams (the Tea-Drinking Imperialists, the Geoids and the Titans) all showed off their geographic literacy and answered a plethora of diverse and complex questions. In the end, it was the Tea-Drinkers who emerged the winners when they figured out that Mecca was the answer to the clue, “Due to this city’s location on a desert trading route, many residents were merchants, the most famous of whom was born around A.D. 570.” And they didn’t just walk away with bragging rights; thanks to Sven Lindblad of Lindblad Expeditions, they also won an amazing adventure trip to either the Arctic, the Galapagos or Antarctica.
Through all of these education efforts — for teachers, students and grown-up Googlers alike — we hope people of all ages never stop exploring.
On behalf of Ridley Scott, Kevin Macdonald, LG, the Sundance Film Festival and all of us at YouTube, thank you to everyone who took part in “Life in a Day.” Using the footage you shot, Kevin will now begin to build the world’s largest user-generated documentary, capturing what it was like to be alive on July 24, 2010.
Remember that even though filming day is over, you have until July 31 at 11:59 p.m. PST to upload your video to the Life in a Day channel. Be sure to subscribe as well, so you can receive directorial updates from the cutting room floor. If your video is selected for inclusion in the final film, you'll be hearing from Life in a Day Films, so be on the lookout for an email.
We'll be in touch again in early January with more details on the film's premiere at Sundance.
Congratulations to everyone.
Posted by Nate Weinstein, Entertainment Marketing Associate
What are you doing today? Something routine like cooking breakfast or taking the dog for a walk? Or is it something extraordinary like your child's first soccer game or your wedding day?
Whatever it is, big or small, we hope you’ll capture it on video and take part in "Life in a Day", a user-generated documentary that will tell the story of a single day on Earth, as seen through your eyes. You have until 11:59 p.m. local time to film something, so get going. For more information, visit the Life in a Day channel.
Get out those cameras and let's make film history.
Posted by Nate Weinstein, Entertainment Marketing Associate
For artists, YouTube is a 21st century canvas. Since the YouTube Play project was announced last month, more than 6,000 videos ranging in genre, topic and budget have been submitted from 69 countries, and the YouTube Play channel has received over 2 million views.
Today, we’re unveiling the jury for YouTube Play, which includes some of the world’s leading artists, from international film festival winners and renowned photographers to performance and video artists on the cutting edge of art.
Over the course of the next few months, these jurors will watch countless hours of videos submitted by the international YouTube community and select the most creative and inspiring work to showcase at the Guggenheim museums in October.
Already, this campaign has drawn some remarkable talent, and we’re looking forward to seeing more of your submissions in our quest to find the most creative video art in the world and showcase it alongside van Gogh and Picasso. The deadline for getting your videos in is July 31. For more information about the jurors and to learn more about how to participate, check out youtube.com/play.
By contracting to purchase so much energy for so long, we’re giving the developer of the wind farm financial certainty to build additional clean energy projects. The inability of developers to obtain financing has been a significant inhibitor to the expansion of renewable energy. We’re excited about this deal because taking 114 megawatts of wind power off the market for so long gives producers both the incentive and the means to build more renewable energy capacity for other customers.
We depend on large quantities of electricity to power Google services and want to take large-scale action to support renewable energy. As we continue to operate the most energy-efficient data centers and work to be carbon neutral, we’re happy to also purchase energy directly from renewable sources.
Posted by Urs Hoelzle, Senior Vice President, Operations
In fourth place, South Koreans were remarkably loyal even though some games began at 3:30am Seoul time. Japan, Australia and New Zealand, also affected by time-zone differences, expressed much less interest. A few countries searched more, not less, but only Honduras and North Korea showed significant increases.
During the knockout rounds, each match’s losing team is eliminated from the tournament. As fewer and fewer teams remain, we expected increased worldwide interest in each remaining game. Unsurprisingly, worldwide queries slowed the most during the final game between the Netherlands and Spain, but the round-of-16 Germany v. England game had the second largest query decrease. Semi-finals and quarter-finals were all popular except for semi-final Uruguay v. Netherlands, during which queries actually increased.
In Latin American countries, search volume dropped more steeply leading into and out of matches while, in Europe, searches ramped down and up more gradually. Of course, for games that went into extra time and penalty shootouts the drops deepened the longer the match went on, including Paraguay v. Japan, Netherlands v. Spain, and Uruguay v. Ghana as seen here:
Finally, no blog post about the World Cup would be complete without a look at what did drive people to search—after the final match, of course. Although he won neither the Golden Boot (for the most World Cup goals) nor the Golden Ball (for best player) last weekend, Spain’s David Villa is winning in search compared to the recipients of those two honors—Germany’s Thomas Müller and Uruguay’s Diego Forlán—and Dutch midfielder Wesley Sneijder. All of these men competed for the Golden Boot with five goals apiece.
Similar to when Carles Puyol headed in the single goal that put Spain in the final, people flocked to the web to search for information on Andres Iniesta, the “quiet man” who scored the one goal that led his country to its first World Cup championship. They were also interested in Dani Jarque, a Spanish footballer who died last fall and whose name was emblazoned on Iniesta’s undershirt, which he displayed after his goal. And after the match, searches for keeper Iker Casillas skyrocketed to a higher peak than any other popular footballer—including household names like Ronaldo, Villa and Messi—reached during the Cup. Sometimes, it seems, goalies get the last word.
We hope you enjoyed our series of posts on World Cup search trends and we’ll see you in Brazil in 2014!
Posted by Jeffrey D. Oldham, Software Engineer and Robert Snedegar, Technical Solutions Engineer
We believe that translation is key to our mission of making information useful to everyone. For example, Wikipedia is a phenomenal source of knowledge, especially for speakers of common languages such as English, German and French where there are hundreds of thousands—or millions—of articles available. For many smaller languages, however, Wikipedia doesn’t yet have anywhere near the same amount of content available.
To help Wikipedia become more helpful to speakers of smaller languages, we’re working with volunteers, translators and Wikipedians across India, the Middle East and Africa to translate more than 16 million words for Wikipedia into Arabic, Gujarati, Hindi, Kannada, Swahili, Tamil and Telugu. We began these efforts in 2008, starting with translating Wikipedia articles into Hindi, a language spoken by tens of millions of Internet users. At that time the Hindi Wikipedia had only 3.4 million words across 21,000 articles—while in contrast, the English Wikipedia had 1.3 billion words across 2.5 million articles.
We selected the Wikipedia articles using several criteria. First, we used Google search data to determine the most popular English Wikipedia articles read in India. Next, using Google Trends, we found the articles that were consistently read over time—and not just temporarily popular. Finally, we used Translator Toolkit to translate articles that either did not exist or were placeholder articles or “stubs” in Hindi Wikipedia. In three months, we used a combination of human and machine translation tools to translate 600,000 words from more than 100 articles in English Wikipedia, growing Hindi Wikipedia by almost 20 percent. We’ve since repeated this process for other languages, to bring our total number of words translated to 16 million.
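The selection step above amounts to a simple filter: keep articles whose readership is both high on average and steady over time. Here’s a minimal sketch of that idea. Everything in it—the `consistently_popular` helper, the thresholds and the pageview numbers—is hypothetical; the actual pipeline used internal Google search data and Google Trends.

```python
# Hypothetical sketch of the "consistently read over time" filter.
# Real inputs would be search/pageview time series per article.

from statistics import mean, pstdev

def consistently_popular(weekly_views: dict[str, list[int]],
                         min_mean: float, max_cv: float) -> list[str]:
    """Keep articles that are popular on average AND steady over time.

    max_cv bounds the coefficient of variation (stdev / mean), which
    filters out articles that were only briefly popular.
    """
    selected = []
    for title, views in weekly_views.items():
        m = mean(views)
        if m >= min_mean and pstdev(views) / m <= max_cv:
            selected.append(title)
    return sorted(selected)

# Invented weekly pageview counts for three articles:
views = {
    "Taj Mahal":        [900, 950, 920, 940],  # popular and steady
    "Cricket":          [800, 820, 790, 810],  # popular and steady
    "Election results": [50, 4000, 60, 40],    # a temporary spike
}

print(consistently_popular(views, min_mean=500, max_cv=0.5))
# prints ['Cricket', 'Taj Mahal'] -- the spiky article is dropped
```

The coefficient-of-variation cutoff is just one way to operationalize "consistently read"; any measure that penalizes short-lived spikes would serve the same purpose.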
We’re off to a good start but, as you can see in the graph below, we have a lot more work to do to bring the information in Wikipedia to people worldwide:
Number of non-stub Wikipedia articles per Internet user, normalized (English = 1)
Beyond these efforts, many Internet users have used our tools to translate more than 100 million words of Wikipedia content into various languages worldwide. If you speak another language, we hope you’ll join us in bringing Wikipedia content to other languages and cultures with Translator Toolkit.
We presented these results last Saturday, July 10, at Wikimania 2010 in Gdańsk, Poland. We look forward to continuing to support the creation of the world’s largest encyclopedia and we can’t wait to work with Wikipedians and volunteers to create more content worldwide.
It can’t have been very long after people started writing that they started to organize and comment on what was written. Look at the 10th century Venetus A manuscript, which contains scholia written fifteen centuries earlier about texts written five centuries before that. Almost since computers were invented, people have envisioned using them to expose the interconnections of the world’s knowledge. That vision is finally becoming real with the flowering of the web, but in a notably limited way: very little of the world’s culture predating the web is accessible online. Much of that information is available only in printed books.
A wide range of digitization efforts have been pursued with increasing success over the past decade. We’re proud of our own Google Books digitization effort, having scanned over 12 million books in more than 400 languages, comprising over five billion pages and two trillion words. But digitization is just the starting point: it will take a vast amount of work by scholars and computer scientists to analyze these digitized texts. In particular, humanities scholars are starting to apply quantitative research techniques for answering questions that require examining thousands or millions of books. This style of research complements the methods of many contemporary humanities scholars, who have individually achieved great insights through in-depth reading and painstaking analysis of dozens or hundreds of texts. We believe both approaches have merit, and that each is good for answering different types of questions.
Here are a few examples of inquiries that benefit from a computational approach. Shouldn’t we be able to characterize Victorian society by quantifying shifts in vocabulary—not just of a few leading writers, but of every book written during the era? Shouldn’t it be easy to locate electronic copies of the English and Latin editions of Hobbes’ Leviathan, compare them and annotate the differences? Shouldn’t a Spanish reader be able to locate every Spanish translation of “The Iliad”? Shouldn’t there be an electronic dictionary and grammar for the Yao language?
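The first question—quantifying shifts in vocabulary across an era—can be made concrete with a small sketch: compare each word’s share of all words between two corpora. The two miniature “corpora” below are invented for illustration; a real study would run this over millions of digitized pages.

```python
# Toy illustration of measuring vocabulary shift between two periods.
# The word lists are invented stand-ins for era-specific corpora.

from collections import Counter

def relative_freq(tokens: list[str]) -> dict[str, float]:
    """Each word's share of all tokens in a corpus."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def freq_shift(corpus_a: list[str], corpus_b: list[str]) -> dict[str, float]:
    """Change in each word's relative frequency from corpus_a to corpus_b."""
    fa, fb = relative_freq(corpus_a), relative_freq(corpus_b)
    vocab = set(fa) | set(fb)
    return {w: fb.get(w, 0.0) - fa.get(w, 0.0) for w in vocab}

early = "the railway the empire the telegraph".split()
late  = "the motor the wireless the cinema".split()

shift = freq_shift(early, late)
print(round(shift["motor"], 3))   # prints 0.167: "motor" gained share
print(round(shift["railway"], 3)) # prints -0.167: "railway" lost share
```

Raw frequency differences are the simplest possible measure; published work in this area typically adds normalization for corpus size and composition, but the underlying comparison is the same.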
We think so. Funding agencies have been supporting this field of research, known as the digital humanities, for years. In particular, the National Endowment for the Humanities has taken a leadership role, having established an Office of Digital Humanities in 2007. NEH chairman Jim Leach says: "In the modern world, access to knowledge is becoming as central to advancing equal opportunity as access to the ballot box has proven to be the key to advancing political rights. Few revolutions in human history can match the democratizing consequences of the development of the web and the accompanying advancement of digital technologies to tap this accumulation of human knowledge."
Likewise, we’d like to see the field blossom and take advantage of resources such as Google Books that are becoming increasingly available. We’re pleased to announce that Google has committed nearly a million dollars to support digital humanities research over the next two years.
Google’s Digital Humanities Research Awards will support 12 university research groups with unrestricted grants for one year, with the possibility of renewal for an additional year. The recipients will receive some access to Google tools, technologies and expertise. Over the next year, we’ll provide selected subsets of the Google Books corpus—scans, text and derived data such as word histograms—to both the researchers and the rest of the world as laws permit. (Our collection of ancient Greek and Latin books is a taste of corpora to come.)
We've given awards to 12 projects led by 23 researchers at 15 universities:
Steven Abney and Terry Szymanski, University of Michigan. Automatic Identification and Extraction of Structured Linguistic Passages in Texts.
Elton Barker, The Open University; Eric C. Kansa, University of California-Berkeley; Leif Isaksen, University of Southampton, United Kingdom. Google Ancient Places (GAP): Discovering historic geographical entities in the Google Books corpus.
Dan Cohen and Fred Gibbs, George Mason University. Reframing the Victorians.
Gregory R. Crane, Tufts University. Classics in Google Books.
Miles Efron, Graduate School of Library and Information Science, University of Illinois. Meeting the Challenge of Language Change in Text Retrieval with Machine Translation Techniques.
Brian Geiger, University of California-Riverside; Benjamin Pauley, Eastern Connecticut State University. Early Modern Books Metadata in Google Books.
David Mimno and David Blei, Princeton University. The Open Encyclopedia of Classical Sites.
Alfonso Moreno, Magdalen College, University of Oxford. Bibliotheca Academica Translationum: link to Google Books.
Todd Presner, David Shepard, Chris Johanson and James Lee, University of California-Los Angeles. Hypercities Geo-Scribe.
Amelia del Rosario Sanz-Cabrerizo and José Luis Sierra-Rodríguez, Universidad Complutense de Madrid. Collaborative Annotation of Digitalized Literary Texts.
Andrew Stauffer, University of Virginia. JUXTA Collation Tool for the Web.
Timothy R. Tangherlini, University of California-Los Angeles; Peter Leonard, University of Washington. Northern Insights: Tools & Techniques for Automated Literary Analysis, Based on the Scandinavian Corpus in Google Books.
We have selected these proposals in part because the resulting techniques, tools and data will be broadly useful: they’ll help entire communities of scholars, not just the applicants. We look forward to working with them, and hope that over time the field of digital humanities will fulfill its promise of transforming the ways in which we understand human culture.
Posted by Jon Orwant, Engineering Manager for Google Books, Magazines and Patents