Posts from 2008
A Toast from the Host
Friday, December 19, 2008
It's been an amazing year for Google Code's Open Source project hosting service. In 2008, our team improved the service by adding source code browsing, project feeds, project updates, a code review tool, content licenses, gadgets, better issue tracking, wiki enhancements, and increased storage quota. Check out our What's New page for further details.
Likewise, Google's own Open Source efforts expanded greatly in 2008 with new releases of everything from Chromium, V8, Android, Doctype, and Native Client, to GXP, Protocol Buffers, and tools for mocking and testing. Meanwhile, our existing Open Source efforts continue full steam on Google Web Toolkit, Gears, Guice, Ganeti, and many others. Google itself is now using Google Code to host over 200 open source projects, large and small. For the full list, just search for label:google.
However, the best part of 2008 came from you, the Open Source community. The number of active projects that we host more than doubled in 2008, spanning everything from games to mobile apps to development tools. Overall participation on the site increased threefold. And our project hosting tools are now used by some of the projects that help drive the open web, such as jQuery and Firebug.
From everyone on the Open Source team, thank you for a great 2008. Keep it coming in 2009!
GeoWebCache 1.0 and Google Summer of Code
Wednesday, December 17, 2008
By Arne Kepp, OpenGeo
OpenGeo and the GeoWebCache team are pleased to announce the release of version 1.0 (download). GeoWebCache is a tile cache for web mapping servers designed to significantly improve the performance of your service and provide easy integration with software such as OpenLayers, Google Maps, Microsoft Virtual Earth and Google Earth. The response time for cached tiles is measured in milliseconds, making it possible to serve hundreds of simultaneous clients using modest hardware.
GeoWebCache has benefited greatly from contributions resulting from the Google Summer of Code™ program. In 2007, Chris Whitney developed jTileCache, the starting point from which GeoWebCache has gradually evolved. This year, Marius Suta spent the summer on GeoWebCache, focusing primarily on a REST API for configuration, and the resulting functionality is available in our 1.0 release. GeoWebCache also underwent improvements to cache KML with support from Google's Open Source Programs Office, as part of a project to enable GeoServer to make placemarks and vector data available through Super Overlays. The new Google Earth functionality in GeoServer can be seen in version 1.7.1, which ships with an integrated GeoWebCache.
GeoWebCache will continue to evolve at a rapid pace with a continually growing open source community. Planned features include letting users specify which WMS parameters clients can vary, so that each layer can forward filters and support multiple sets of tiles. Steps are also being taken to enable GeoWebCache to automatically expire tiles as data changes on the backend or styles are modified. To facilitate these features, storage of tiles and meta information will be improved. An AJAX-based frontend for tasks such as configuration, seeding and testing layers is also in the works.
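To give a feel for why tile caching is so fast: tile servers pre-render map images on a fixed grid of zoom levels, so answering a request is just an index lookup rather than a render. Here is a rough sketch of the standard spherical-Mercator tile arithmetic; it is illustrative only, not GeoWebCache's actual code:

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Map a WGS84 coordinate to slippy-map (x, y) tile indices."""
    n = 2 ** zoom  # tiles per axis at this zoom level
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

At zoom level z the world is divided into 2^z by 2^z tiles, so a cached layer can answer any map request by fetching a handful of pre-rendered images, which is why response times stay in the millisecond range.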
New File Systems Added to MacFUSE
Tuesday, December 16, 2008
By Amit Singh, Engineering Manager - Mac Development Team
The MacFUSE State of the Union Talk video is now available. The source code for the new file systems discussed and demonstrated during the talk is also available. Please head over to the MacFUSE source repository for the following:
- AncientFS - a file system that lets you mount ancient (and in some cases current-day) Unix data containers as regular volumes on Mac OS X.
- UnixFS - a general-purpose abstraction layer for implementing Unix-style file systems in user space.
- ufs - a user-space implementation (read-only) of the UFS file system family.
- sysvfs - a user-space implementation (read-only) of the System V file system family.
- minixfs - a user-space implementation (read-only) of the Minix file system family.
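File systems like the ones above, built on UnixFS, ultimately reduce to implementing a handful of callbacks (readdir, read, getattr, and so on) over some backing container. Here is a deliberately tiny illustration of that callback style in Python; the names are hypothetical and are not MacFUSE's actual API:

```python
import errno

class InMemoryFS:
    """A toy read-only file system exposing a dict's contents through
    FUSE-style callbacks: readdir lists entries, read returns bytes."""

    def __init__(self, files):
        self.files = files  # maps path -> file contents as bytes

    def readdir(self, path):
        # List the root directory only, for simplicity.
        return sorted(name.lstrip("/") for name in self.files)

    def read(self, path, size, offset):
        if path not in self.files:
            raise OSError(errno.ENOENT, "no such file", path)
        return self.files[path][offset:offset + size]

fs = InMemoryFS({"/hello.txt": b"hello, user space"})
```

A real user-space file system plugs callbacks like these into the kernel bridge (MacFUSE on Mac OS X), which routes ordinary system calls from any application to them.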
C++ Mocking Made Easy
Thursday, December 11, 2008
Post by Zhanyong Wan, Software Engineer - Engineering Productivity Team
Since we open-sourced the Google C++ Testing Framework in July 2008, many people have asked us when we will release a mocking framework to go with it. You asked, we listened; today we released the Google C++ Mocking Framework under the New BSD License. It is inspired by popular Java mocking frameworks like jMock and EasyMock, and works on Linux, Windows, and Mac OS X. More details are on the Google Testing Blog. As usual, we are eager to hear from you, so please share your thoughts with us on the Google Mock Discussion Group!
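The expectation-based style that gMock brings to C++ (familiar from jMock and EasyMock) follows a common pattern: replace a collaborator with a mock, exercise the code under test, then verify the calls. As a rough illustration of that pattern only — this is Python's unittest.mock, not gMock's C++ syntax, and the Turtle interface is a hypothetical example:

```python
from unittest import mock

class Turtle:
    """A collaborator we want to replace in tests (a hypothetical
    turtle-graphics interface, echoing the example gMock's docs use)."""
    def pen_down(self): ...
    def forward(self, distance): ...

def draw_line(turtle, length):
    """Code under test: drives the Turtle collaborator."""
    turtle.pen_down()
    turtle.forward(length)

# create_autospec keeps the mock faithful to Turtle's method signatures.
turtle = mock.create_autospec(Turtle, instance=True)
draw_line(turtle, 10)

# Verification step: assert the expected interactions happened.
turtle.pen_down.assert_called_once()
turtle.forward.assert_called_once_with(10)
```

In gMock the verification is expressed up front with EXPECT_CALL macros and checked automatically when the mock is destroyed, but the underlying mock-and-verify idea is the same.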
BSDers at the Googleplex
Wednesday, December 10, 2008
By Matt Olander and Murray Stokely, BSD Community
The meetBSD 2008 conference recently held at the Googleplex in Mountain View, California, USA brought together more than 150 users and developers of the various flavors of the BSD operating system. The conference featured some great speakers, including talks by Robert Watson, Philip Paeps, Kris Moore and many others. There was also a panel to discuss the Google Summer of Code™ program, hosted by Murray Stokely and Leslie Hawthorn of Google. They were joined on stage by former mentors and students from the FreeBSD and NetBSD projects to give an overview of the program, some of the amazing results, and some tips and stories about participating. Saturday's content wrapped up with impromptu breakout sessions to discuss PC-BSD, FreeBSD, security issues, and other topics.
After the first day of the conference, attendees were taken by bus to the Zen Buddha Lounge in Mountain View for a private party to celebrate the 15th Anniversary of the FreeBSD operating system. A great time was had by all and, like most birthday parties, this one included a cake! We went a step further though: our cake was shaped like the FreeBSD logo in 3D, complete with horns. Dr. Kirk McKusick had the honors of cutting the cake and handing out a few pieces.
Thanks to help from the Open Source Programs Office at Google, we were able to set up a new YouTube channel for technical BSD content, allowing us to upload high-quality, hour-long videos of talks and tutorials from BSD conferences. Many of the talks from MeetBSD 2008 are already available, and videos from MeetBSD 2007 and NYCBSDCon 2008 have also been uploaded. You can view these videos at http://www.youtube.com/bsdconferences. You may also want to check out photos from the conference and the aforementioned birthday party.
The main conference was followed by an invitation-only FreeBSD Developer Summit which was a great success. We had over 30 attendees from the FreeBSD Developer Community as well as engineers from Yahoo, NetApp, Isilon, QLogic, Huawei, Google, Juniper, Cisco, Facebook, ISC, Metaweb, and other technology companies using or looking at using FreeBSD. There were formal presentations on the first day, followed by less structured hacking during the second day. The agenda of talks for the first day is available here.
Hard at work at the FreeBSD Developer Summit
(photo credit: Murray Stokely)
Finnish Summer Code 2008
Monday, December 8, 2008
By Sanna Heiskanen and Elias Aarnio - The Finnish Centre for Open Source Solutions (COSS)
Finnish Summer Code is a yearly project organized by FILOSI (the Finnish Linux and Open Source Initiative), which aims to support Finnish participation in significant Open Source projects and to strengthen the Open Source competencies which companies need. FILOSI, a joint project of institutes and companies involved in software development, operates as a part of COSS, the Finnish Centre for Open Source Solutions. The program recently concluded for 2008, and five students participated this year. Juuso Alasuutari improved the LASH Audio Session Handler, and Sakari Bergen's project was about improving Ardour. Niklas Laxström worked on MediaWiki translation support, and Olli Savolainen's topic was the Moodle Quiz. You can read the final reports for each of our students' projects on the Finnish Summer Code site.
Also known as Wellark, Antti Kaijanmäki is a 21-year-old student of digital and computer technology at Tampere University of Technology. He participated in the Finnish Summer Code project in 2008 and worked on improvements that make mobile networking in Linux easier. Specifically, he programmed a settings assistant for mobile networking devices on Linux, making it possible to set up mobile networking with a graphical clickthrough tool. The tool also contains settings for major operators throughout the world. Antti's work was first published in Ubuntu 8.10, and the overall comfort and performance of Network Manager has been praised in various reviews and blogs. Given all the buzz over his work, we thought we'd sit down with Antti and ask him a bit more about his project.
(Note: Site is in Finnish. If you would like to view the site in English, you can select this option from the page's top navigation menu.)
Antti, what made you apply for the Finnish Summer Code?
I have had the idea for the project in my mind for a few years. I don't remember where I heard about the Finnish Summer Code, but because I already had the idea I decided to send an application.
Does the concept of Finnish Summer Code excite students in general?
It's very exciting - you get to do what you want. The only thing we laughed about was making our presentations, both at the application stage and at the end; they require quite a lot of work. For a normal summer job, you simply get the instructions and do the work. The difference is rather clear.
What kind of advice would you give to a person considering applying for the Finnish Summer Code?
The motivation to do a Finnish Summer Code project goes hand in hand with being motivated to do general FOSS development. It is all about personal motivation. For me, it was very relevant that someone was interested in my work. Ubuntu activists Alexander Sack and Rubén Romero got excited about my project and gave me ideas and support. I also had friends who were interested in the project, which was also quite inspiring. When I'd completed my project, it felt good to see all the blog posts about how great it was - how my code just made things work.
What has this positive publicity felt like?
Of course it feels really good. One of the motivations for doing FOSS work is helping others, and it feels great to achieve that goal. It feels good to solve a problem and, by doing so, help a lot of people. The mobile networking assistant is not yet perfect and it needs ongoing maintenance. The database that holds the settings of different operators is mostly taken from work already completed by the GPRS Easy Connect project. Some changes have been made to update the database, and new changes are coming at a rate of approximately one per week.
There has been some uncertainty about who has done what. My part of the whole thing was the settings assistant – the guided clickthrough. The actual PPP connection stack was programmed by other contributors. As I had some extra time, I did the integration for the Network Manager applet, too. The Network Manager user interface is one of the challenges for the future. We will also add new types of connection devices and types of networking.
Many congratulations to Antti, Juuso, Niklas, Olli and Sakari on their accomplishments! We would also like to thank Google for once again sponsoring the program. Next year's Finnish Summer Code application period starts in January 2009. Keep an eye on the program website for more details.
More Adventures from SciPy: Jenny Qing Qian
Friday, December 5, 2008
By Leslie Hawthorn, Open Source Team
You may recall our recent post from Rachel McCreary detailing her experiences at the 7th Annual Python in Science Conference (a.k.a. SciPy 2008). Also joining Rachel at SciPy was Jenny Qing Qian, one of Rachel's fellow Google Summer of Code™ 2008 students and Pygr developer. Jenny created a Python Ensembl API for her Summer of Code project, and attending SciPy gave her the opportunity to showcase her work and learn more from her co-developers. She was kind enough to send us this report from the conference:
Many thanks to Jenny for sharing her thoughts with us and many congratulations to her and Rachel for their successes this summer!
The introductory tutorials, held during the first two days of the conference, were fantastic. I got an excellent hands-on demo of the interactive Python shell – IPython – and of other general Python tools and libraries for scientific computing, such as NumPy and SciPy. In addition, I was fascinated by the diversity of plotting tasks the Matplotlib package can perform, tasks traditionally carried out using Matlab.
During the conference, I really enjoyed the keynote speech from Alex Martelli, who currently works at Google. His talk addressed the fundamental yet often neglected problem of treating a numeric software package as a 'black box' in the course of scientific and engineering computing. Supported by many vivid real-world examples, he effectively conveyed the message that you must be crystal clear about what you're computing and understand what the 'black boxes' can and cannot do. Otherwise, results may be far from accurate, which can lead to disastrous outcomes, especially in the field of engineering; often the 'black box' is simply not well-conditioned for the specific task or the input data. His talk provided useful input not only to users of software packages, but also to their developers. It rightfully prompts developers to carefully document the behavior and functionality of their software packages, especially the conditions for using them. In turn, this might help prevent the software being used for other than its intended purposes.
In addition, it was also interesting to listen to talks from various developers about how they apply or extend general Python or SciPy libraries to solve their domain-specific problems. One such talk was Summarizing Complexity in High Dimensional Spaces, in which Karl Young presented a very useful method for producing diagnostic summary information from multi-dimensional, multi-spectral medical image data, built on SciPy's powerful array computation capabilities. I have implemented methods in R to analyze high-dimensional biological image data sets, such as time series analysis of microarray data, and I have implemented algorithms in Matlab to analyze and classify data with a large number of features, such as documents. Inspired by the talk, I'd love to try using SciPy to develop analysis tools for large, high-dimensional, multivariate data sets that characterize fundamental properties of dynamic and complex biological systems.
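As a concrete taste of this kind of vectorized array summarization, here is a tiny NumPy sketch on made-up data (nothing from the actual talk): reducing a samples-by-features matrix to per-feature summaries in one call each.

```python
import numpy as np

# Hypothetical data: 4 samples measured across 3 features.
data = np.array([[1.0, 2.0, 3.0],
                 [3.0, 2.0, 1.0],
                 [2.0, 2.0, 2.0],
                 [4.0, 2.0, 0.0]])

means = data.mean(axis=0)        # per-feature mean across all samples
spreads = np.ptp(data, axis=0)   # per-feature range (max - min)
```

The same axis-wise pattern scales from this toy matrix to the multi-dimensional image volumes discussed in the talk, which is what makes the array model so attractive for scientific computing.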
After the conference concluded, I stayed for the coding sprints. My Summer of Code project was about prototyping a database API using standard components of Pygr – a Python Graph Database Framework. The API retrieves information from a central biological data warehouse, the core Ensembl database system. At the sprint session, I finally got to meet the main Pygr developers, Dr. Christopher Lee (the project's founder) and Dr. Titus Brown.
During the sprint session, both Rachel and I presented and demoed our summer projects to the whole group, and we got some great feedback on our progress to date.
After the presentation, Dr. Lee and I discussed how best to reuse existing Pygr components to further simplify my API framework, so as to make it more maintainable and easier to extend. We also debugged the problems I encountered while porting the Ensembl database schema to the pygr.Data namespace. For this project, we had decided to model the complex Ensembl database schemas using the strong support of the pygr.Data module. More specifically, rather than implementing a complex database schema in a conventional ORM (Object-Relational Mapping) style, this module transforms a schema into a portable Python namespace. In doing so, we hope to provide API developers as well as end users with a much cleaner and more intuitive interface for accessing and distributing the relations among Ensembl data objects. Through this joint effort, I finally managed to save and retrieve typical Ensembl database schemas into and from the pygr.Data namespace! Needless to say, these discussions and my entire SciPy experience left me feeling incredibly motivated to continue working on my project.
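The contrast between an ORM and a namespace of resources can be sketched in a few lines. This is a loose, hypothetical illustration of the idea only; none of these names are Pygr's real API:

```python
class ResourceNamespace:
    """Toy registry mapping dotted names to data resources, loosely in
    the spirit of a pygr.Data-style namespace (names are hypothetical)."""

    def __init__(self):
        self._resources = {}

    def save(self, dotted_name, resource):
        # Instead of defining ORM classes per table, publish the
        # resource under a portable dotted name.
        self._resources[dotted_name] = resource

    def get(self, dotted_name):
        return self._resources[dotted_name]

ns = ResourceNamespace()
ns.save("Bio.Ensembl.Human.genes", {"BRCA2": "chr13"})
genes = ns.get("Bio.Ensembl.Human.genes")
```

The point of the namespace style is that consumers ask for data by name and never see the underlying table layout, which keeps the schema portable across storage backends.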
Many thanks to Jenny for sharing her thoughts with us and many congratulations to her and Rachel for their successes this summer!
Open Source Jams Head to Belo Horizonte
Wednesday, December 3, 2008
By Gustavo Franco, Systems Administration Team
In mid-November, we held our first Open Source Jam in Brazil. If you're not sure what the Open Source Jam is — well, it depends on who shows up! It's an open forum for Open Source fans, hackers, and just plain geeks to get together, have a beer, and hear what's going on with one another. If you're looking for people to try out your latest patches or to help you get a project off the ground, it's a great place to start a conversation. You can hang out with all of us at jams in London, Zurich and now Belo Horizonte.
We had more than 30 people at our first jam. Some of them gave lightning talks on various Open Source projects, including Haiku OS, PHPEclipse, the Linux Kernel, Curl FTPFS, Megalinux, WebKit and GTK+. The lightning talks were given in two sessions, and in between, small groups discussed ideas and projects. To make sure nobody was thirsty or hungry, we provided free beer and food. You may want to check out our photo album of the event.
Open Source Jams are semi-regular events. To stay informed about the next jam in Belo Horizonte, or to catch up on discussions about previous ones, join the Open Source Jam Brazil Google Group. We hope to see you at our next jam!
Summer of Coders at SciPy: Rachel McCreary
Tuesday, December 2, 2008
By Leslie Hawthorn, Open Source Team
You never know where an experiment will lead you. When we launched the Google Highly Open Participation Contest™, we weren't sure how many pre-university students would be eager to participate in Open Source development. We weren't sure what kind of work would be most useful to the participating Open Source projects and most compelling to our student contestants. Of course, we were delighted when the first GHOP was a rousing success.
A few weeks later, Titus Brown, the Python Software Foundation's (PSF) GHOP administrator, wrote to let us know of another success story. He'd be mentoring Rachel McCreary for her Google Summer of Code ™ project to improve Pygr, a graph database interface written in Python. Rachel's inspiration to apply for Summer of Code? Her little sister had recently participated in GHOP, working with the PSF.
Rachel did quite well in her project. So well, in fact, that she was invited to attend the 7th Annual Python in Science Conference (SciPy 2008), held August 19 – 24 at Caltech in Pasadena, California, USA, to meet up with her fellow Pygr developers. Rachel was kind enough to send us this report:
Many thanks to Rachel for the report. If you feel like sharing your own Summer of Code or Highly Open Participation Contest success stories, we would love to hear from you. Post a comment and share your joys.
The conference provided a great opportunity to learn about the various ways Python is used in scientific applications. As a newcomer to this field, I was overwhelmed by the diverse and incredibly active Open Source community. Several of the conference attendees had new and innovative ways to incorporate Python into their work, and I spent the majority of the breaks and lunches learning about the impressive accomplishments of my fellow conference attendees.
Even more exciting than the tutorials were the presentations held on the final two days of the conference. While all were interesting and informative, my personal favorite was the NetworkX presentation. NetworkX is a tool that analyzes networks by manipulating basic graph and data structures and performing numerous computations on them. One of the applications of NetworkX is the prediction of disease outbreaks, and since I am a total epidemiology geek, I was fascinated.
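The graph idea underlying tools like NetworkX can be sketched without the library itself. Below is a hypothetical contact network and a breadth-first search answering the outbreak-style question "who could an infection starting at one person reach?":

```python
from collections import deque

# Hypothetical contact network: person -> people they interacted with.
contacts = {
    "ana": ["ben", "cai"],
    "ben": ["ana", "dee"],
    "cai": ["ana"],
    "dee": ["ben"],
    "eli": [],
}

def reachable(graph, start):
    """Breadth-first search: everyone a disease could reach from start."""
    seen, queue = {start}, deque([start])
    while queue:
        person = queue.popleft()
        for neighbor in graph[person]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

outbreak = reachable(contacts, "cai")
```

NetworkX packages this kind of traversal, plus far richer metrics, behind a graph object, so epidemiological models can be layered on top without rewriting the plumbing.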
Furthermore, several members of the Pygr project were on hand that week, which provided ample opportunity for the project team to discuss the successes of my summer project, review code, and plan for the future. It was wonderful to finally put faces to names, and I presented my Google Summer of Code project to the group. As the least skilled member of the Pygr clan, I benefited tremendously from watching my fellow developers demonstrate and explain the most efficient ways to improve and use Pygr. I plan to continue working on Pygr now that my official project has concluded, and the sprint helped me find tasks to focus on in the future.
While SciPy has long been over, the conference had an unexpected impact on me. Once school started back up, my research advisor assigned me a new bioinformatics computing project, which clearly needs some NumPy love. Luckily, I've had just the introduction I need to dive right in!
Many thanks to Rachel for the report. If you feel like sharing your own Summer of Code or Highly Open Participation Contest success stories, we would love to hear from you. Post a comment and share your joys.
Open Source Developers @ Google Speaker Series: Amit Singh
Monday, December 1, 2008
By Leslie Hawthorn, Open Source Team
Amit Singh, Engineering Manager - Mac Development Team, will once again be joining us for an update on all things MacFUSE. MacFUSE, an Open Source mechanism that allows you to extend Mac OS X's native file system capabilities, has come a long way since its introduction at Macworld 2007. Amit will be sharing all that's new for developers and users in this MacFUSE State of the Union talk. Among other topics, Amit will cover:
Amit will also address MacFUSE best practices and how these can help you write less code that does more, as well as some advanced little known tips and tricks for the system.
You'll also get to see some interesting and unusual file systems never before seen on Mac OS X.
If you are nearby Google's Mountain View, California, USA Headquarters on Monday, December 8th, please join us for Amit's MacFUSE State of the Union. Doors open at 4:30 PM and light refreshments will be served. All are welcome and encouraged to attend; guests should plan to sign in at Building 43 reception upon arrival. For those of you who cannot join us in person, the presentation will be taped and published along with all public Google Tech Talks. We hope to see you there!
For those of you who were unable to attend Amit's May 2007 presentation on MacFUSE, you might want to check out the video.
Amit Singh, Engineering Manager - Mac Development team, will once again be joining us for an update on all things MacFUSE. MacFUSE, an Open Source mechanism that allows you to extend Mac OS X's native file system capabilities, has come a long way since its introduction at Macworld 2007. Amit will be sharing all that's new for developers and users in this MacFUSE State of the Union talk. Among other topics, Amit will cover:
- How to leverage new features in MacFUSE to make your file system act more like a native Mac OS X file system
- What the upcoming 64-bit support in MacFUSE means for your file systems
- How to use the new file system templates in MacFUSE to quickly get started on a new file system
- How to choose the best MacFUSE API for your specific needs
- How to use the power of DTrace to debug and analyze your file systems
Amit will also address MacFUSE best practices and how these can help you write less code that does more, as well as some advanced, little-known tips and tricks for the system.
You'll also get to see some interesting and unusual file systems never before seen on Mac OS X.
If you are near Google's Mountain View, California, USA headquarters on Monday, December 8th, please join us for Amit's MacFUSE State of the Union. Doors open at 4:30 PM and light refreshments will be served. All are welcome and encouraged to attend; guests should plan to sign in at Building 43 reception upon arrival. For those of you who cannot join us in person, the presentation will be taped and published along with all public Google Tech Talks. We hope to see you there!
For those of you who were unable to attend Amit's May 2007 presentation on MacFUSE, you might want to check out the video.
Emoji for Unicode: Open Source Data for the Encoding Proposal
Wednesday, November 26, 2008
By Markus Scherer, Google Internationalization Engineering
Emoji (絵文字), or "picture characters", the graphical versions of :-) and its friends, are widely used and especially popular among Japanese cell phone users. Just last month, they became available in Gmail ― see the team's announcement: A picture is worth a thousand words.
These symbols are encoded as custom (carrier-specific) symbol characters and sent as part of text messages, emails, and web pages. In theory, they are confined to each cell phone carrier's network unless there is an agreement and a converter in place between two carriers. In practice, however, people expect emoji just to work - what they put into a message will get to all the recipients; what they see on a web page will be seen by others; if they search for a character they'll find it. For that to really work well, these symbol characters need to be part of the Unicode Standard (the universal character set used in modern computing).
There are active, ongoing efforts to standardize a complete set of emoji as regular symbol characters in Unicode. This involves determining which symbols are already covered in Unicode, and which new symbols would be needed. We're trying to help this effort along by sharing all of our mapping data and tools in the form of the "emoji4unicode" open source project. The goal is more effective collaboration with other members of the Unicode Consortium and review by the cell phone carriers and other interested parties. By making these tools and mappings available, we hope to assist and accelerate the encoding process. Take a look at the documentation, browse the data and tools and let us know what you think.
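To make the mapping problem concrete, here is a minimal sketch of a carrier-to-Unicode conversion table in C. The specific code-point pairs and names below are placeholders for illustration, not the real emoji4unicode data; Japanese carriers encode emoji in Unicode's Private Use Area, so unmapped values fall back to U+FFFD, the replacement character.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative entries pairing carrier-private code points with
 * standard Unicode symbols.  These pairings are placeholders, not
 * the actual emoji4unicode tables. */
typedef struct {
    uint32_t carrier_cp;   /* carrier-specific PUA code point */
    uint32_t unicode_cp;   /* corresponding Unicode code point */
} emoji_mapping;

static const emoji_mapping kMap[] = {
    { 0xE63E, 0x2600 },  /* sun      (placeholder pairing) */
    { 0xE63F, 0x2601 },  /* cloud    (placeholder pairing) */
    { 0xE640, 0x2614 },  /* umbrella (placeholder pairing) */
};

/* Convert one carrier code point; returns U+FFFD (the replacement
 * character) when no agreed mapping exists -- exactly the
 * interoperability gap the post describes. */
uint32_t emoji_to_unicode(uint32_t carrier_cp)
{
    for (size_t i = 0; i < sizeof kMap / sizeof kMap[0]; i++)
        if (kMap[i].carrier_cp == carrier_cp)
            return kMap[i].unicode_cp;
    return 0xFFFD;
}
```

A converter between two carriers is then just two such tables composed, which is why a single shared Unicode target simplifies the whole problem.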
WHOPR - A scalable whole program optimizer for GCC
Thursday, November 20, 2008
By Diego Novillo, Google Compiler Team
Traditional compilation proceeds one file at a time. The compiler optimizes and generates code for each file in isolation and then the final executable is created by linking all the individual files together.
This model of compilation has the advantage that all the files can be compiled concurrently, which greatly reduces build time. This is particularly useful with current multiprocessor machines and applications consisting of hundreds and even thousands of files. However, this model presents a fairly significant barrier for optimization.
Consider this program:
foo.c:
float foo(float i, float j)
{
    float x = 0;
    for (;;) {
        ...
        x += g(f(i, j), f(j, i));
        ...
    }
}
bar.c:
float f(float i, float j)
{
    return i * (i - j);
}

float g(float x, float y)
{
    return x - y;
}
From an optimization perspective, inlining f and g inside foo is likely going to provide significant performance improvements. However, the compiler never sees both files at the same time. When it is compiling foo.c it does not have access to the functions in bar.c and vice-versa. Therefore, inlining will never take place.
One could get around this problem by moving the bodies of f and g to a common header file and declaring them inline. But that is not always desirable or even possible. So, several optimizing compilers introduce a new feature called Link-Time Optimization (LTO) that allows these kinds of cross-file manipulations by the compiler.
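The header-file workaround mentioned above would look something like this (a minimal sketch using the names from the example; the header name is made up):

```c
/* common.h -- f and g moved into a shared header so that every
 * translation unit including it sees their bodies.  "static inline"
 * keeps each emitted copy local to its translation unit. */
#ifndef COMMON_H
#define COMMON_H

static inline float f(float i, float j)
{
    return i * (i - j);
}

static inline float g(float x, float y)
{
    return x - y;
}

#endif /* COMMON_H */
```

With the definitions visible, the compiler can inline the calls in foo.c without any link-time machinery; the downside is that every change to f or g forces recompiling all files that include the header, and not every function can be exposed this way.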
The LTO model essentially splits code generation into two major phases:
1. Generation of Intermediate Representation (IR). The original source code is parsed as usual and an intermediate representation (IR) for the program is generated. The IR is a succinct representation of the original program, symbols and types. This contains all the information that the compiler needs to generate final code, except that instead of using it to generate final code for each file, the compiler saves it to an intermediate file for later processing.
2. Once the IR has been emitted for all the source files, the intermediate files generated in the previous step are loaded in memory and the whole set is analyzed and optimized at once.
To visualize this process, imagine that you simply concatenated all the source files together and compiled the result. Since the compiler has visibility over every function in every compilation unit, decisions that would normally use conservative estimates can instead be based on data-flow information crossing file boundaries. Additionally, the compiler is able to perform cross-language optimizations that are not possible when the compilation scope is restricted to individual files.
The LTO model of compilation is useful but it has severe scalability issues. A basic implementation has the potential to incur massive memory consumption during compilation. Since every function body in every file may be needed in memory, only relatively small programs will be able to be compiled in whole program mode.
At Google, we deal with several very large applications, so we are working on a scalable alternative to traditional LTO called WHOPR (WHOle Program optimizeR), which introduces parallelism and distribution to be able to handle arbitrarily large programs. The basic observation is that to do many whole program optimizations, the compiler rarely needs to have all the functions loaded in memory, and final code generation can be parallelized by partitioning the program into independent sets.
WHOPR then proceeds in three phases:
1. Local Generation (LGEN). This is the same as traditional LTO. Every source file is parsed and its IR saved to disk. This phase is trivially parallelizable using make -j or distcc or any other similar technique.
2. Whole program analysis (WPA). After all the IR files are generated, they are sent to the linker, but in this case the linker will not know what to do with them (there is no object code in them). So, the linker turns around and passes them back to the compiler which will collect summary information from every function in every file. This per-function summary information contains things like number of instructions, symbols accessed, functions called, functions that call it, etc. It is used to decide what optimizations to apply, but no optimizations are applied at this time. The compiler simply decides what to do and partitions the input files into new files that contain the original IR plus an optimization plan for each new file.
3. Local transformations (LTRANS). The new IR files generated by the previous phase are now compiled to object code using the optimization plan decided by WPA. Since each file contains everything needed to apply the optimization plan, it can also proceed in parallel.
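The per-function summary that WPA consumes can be pictured as a small record per function. The fields below follow the ones listed in the text, but the names and the toy heuristic are invented for illustration; the actual GCC LTO data structures are considerably richer.

```c
/* Sketch of WPA's per-function summary (field names invented;
 * GCC's real LTO summaries carry much more information). */
typedef struct {
    const char *name;      /* function symbol */
    int insn_count;        /* number of instructions */
    int num_callees;       /* functions it calls */
    int num_callers;       /* functions that call it */
} func_summary;

/* A toy decision of the kind WPA might record in an optimization
 * plan: inline small functions that actually have callers.  The
 * plan is only recorded here; the transformation itself happens
 * later, in parallel, during LTRANS. */
int should_inline(const func_summary *s)
{
    return s->insn_count < 20 && s->num_callers > 0;
}
```

The key property is that WPA only reads these compact summaries, never whole function bodies, which is what keeps the one sequential phase cheap.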
This diagram shows the process. The only sequential step during optimization is the WPA phase, which operates on relatively little data and is not computationally expensive. Everything else proceeds in parallel, suitable for multiprocessors or distributed machines.
After all the LTRANS processes are finished, the final object files are returned to the linker and the final executable is generated.
We are currently in the initial stages of implementation. The work is being implemented in the LTO branch in GCC. We expect to have an initial prototype by summer 2009. The branch can currently deal with some applications, but there are the usual rough spots. You can read more information about the project at http://gcc.gnu.org/wiki/whopr
Serious Geeking Going on in Oxford Over Online Publishing
Thursday, November 13, 2008
By J-P Stacey, OxfordGeeks
On Wednesday 22 October, over a hundred geeks attended the ninth Oxford Geek Night, upstairs at the Jericho Tavern. After the musical theme of the previous OGN, this one had a distinct flavour of online publishing.
Jeremy Ruston of BT Osmosoft demonstrated TiddlyWiki (an open-source wiki application that works offline) and revealed its offshoot Project Cecily, a prototype ZUI (Zooming User Interface). Adrian Hon of Six to Start then explained the ideas and tech behind We Tell Stories, a complex Django-based site of interactive fiction, built for publishers Penguin UK.
Continuing the Django-ish theme, Rami Chowdhury discussed WSGI—the server/application web standard—in one of the more technical microslot talks (five minutes each, from local volunteers). In another, David Sheldon took us through the steps required to hack a CurrentCost electricity meter, to get at the regular XML packets it emits from a serial port.
In the microslot sessions we also covered moving your business mail to Google Mail, protection—or otherwise—against socially engineered virus vectors, and how to use an interlocking stack of Python, Ruby on Rails and Java to crawl the web for comparisons of mobile-phone tariffs. We also had a short talk from the Oxfordshire branch of the British Computing Society about their forthcoming IT-industry events.
As usual, the evening was rounded off by a book raffle, this time courtesy of Pearson Education. Many of the night’s talks—especially the keynotes and the microslot on antivirus protection—had generated heated debate among the geeks in the room, and this carried on for some time after proceedings had officially finished.
The Oxford Geek Nights are free events, thanks to Torchbox and the Google open-source team. But even the generosity of our sponsors couldn’t prevent the upstairs bar staff from tapping their watches, as we all headed downstairs into the main room of the pub to continue arguing.
Mixxx's Google Summer of Code 2008 Roundup Report
Wednesday, November 12, 2008
By Albert Santoni, Mixxx Project
Google Summer of Code 2008 has been a great opportunity to bring fresh new talent into the Mixxx development team. For those not familiar with Mixxx, it's software that allows DJs to create live beatmixes. This year, Mixxx was supported by four students, each with a new project to help improve some aspect of Mixxx. As this was our second Summer of Code, we helped plan our students' projects better this year, which led to our students producing more maintainable code with clear paths for integration into our trunk.
Zach Elko worked on session saving and crash recovery. Session saving allows a DJ to save various aspects of Mixxx's state (such as the knob positions) for easy recall later. Early on in his project, Zach realized that a good starting point for a crash recovery system would be to allow Mixxx sessions to be saved and restored. His project's focus was shifted towards creating a rock solid session saving system, and Zach has made significant inroads toward this goal.
Russell Ryan rewrote Mixxx's waveform viewer widget. The waveform viewer widget renders a song's waveform in realtime and scrolls through it as playback proceeds. It also allows a user to seek through a song by dragging the widget. Russell's new waveform widget provides improved performance, better modularity, and is much more extensible than our previous widget. The new waveform viewer was merged into trunk in late July and was featured in our recent 1.6.0 final release.
Tom Care worked on improving Mixxx's support for hardware MIDI controllers, which are popular with DJs. MIDI controllers are hardware control devices that mimic the look and feel of real DJ mixers, and can make mixing much easier. Tom's work has yielded an easy MIDI binding interface so DJs can use any MIDI device with Mixxx, as well as overall improvements to the structure and modularity of our MIDI code.
And finally, Wesley Stessens continued his project to add Shoutcasting capabilities, which he began earlier in the year. Shoutcast support allows DJs to broadcast their mixes live through internet radio stations. Unfortunately, due to personal circumstances Wesley had to leave GSoC at the midterm. We're hopeful that he will rejoin our development team in the future.
By the end of August, the three remaining projects were in good shape. The two projects which haven't yet been merged into trunk have timelines for being merged, and we're pleased with the outcome of these projects. We'd like to thank Google for their gracious support through Summer of Code this year. It's been a fantastic experience for us, and we're happy that we were able to introduce some students to open source development.
plop: Probabilistic Learning of Programs
Tuesday, November 11, 2008
By Moshe Looks, Google Research
Cross-posted with the Google Research blog
Traditional machine learning systems work with relatively flat, uniform data representations, such as feature vectors, time-series, and probabilistic context-free grammars. However, reality often presents us with data which are best understood in terms of relations, types, hierarchies, and complex functional forms. The best representational scheme we computer scientists have for coping with this sort of complexity is computer programs. Yet there are comparatively few machine learning methods that operate on programmatic representations, due to the extreme combinatorial explosions involved and the semantic complexities of programs.
The plop project, launched early Monday, November 10th, is unusual in addressing these issues directly - its long-term goals are quite ambitious! Plop is being implemented in Common Lisp, an equally unusual programming language that is uniquely suited to constructing and transforming complex programmatic representations.
C++ Standards Meeting Finalizes Feature-Complete Draft Standard
Monday, November 10, 2008
By Matt Austern and Lawrence Crowl, Google
In late September, Google hosted the 44th meeting of the ISO C++ Standards Committee in San Francisco, California. Approximately 50 members from seven countries met six days non-stop to nail down details of the new standard.
The new standard, "C++0x", will be a major upgrade to the language—the first major upgrade since C++ first became an International Standard in 1998. It will include support for concurrent programming, better abstraction power and efficiency, simpler programming, enhanced functional programming, upgraded generic programming, optional garbage collection, significant new library components (including TR1), and many other additions and cleanups. C++0x will still be recognizably the same language as today's C++, and it will be almost 100% compatible, but working programmers will find the new standard a much improved tool for serious application development.
All of the features in C++0x have been on the table for years, but this meeting was the one when the committee finally voted to commit to them. Among the long-awaited features added at this meeting were user-defined literals, symbol attributes, simplified iteration with Python-like for loops, library thread safety, and improved generic programming with "concepts".
This was an unusually busy meeting, and it achieved a major milestone: this is the meeting where the committee voted to advance the draft standard to Committee Draft. Or, in less bureaucratic language, we've shipped our beta. The language is now feature complete. The committee will still fix some bugs before the final version is officially released in 2010, and some features might get tweaked or even dropped, but you shouldn't expect major changes. Interested programmers can try the partial g++ implementation.
SWIG's First Google Summer of Code
Friday, November 7, 2008
By William Fulton, SWIG administrator
SWIG is a programmer's tool for semi-automating calls to C or C++ code from almost any other programming language. The idea is to feed C/C++ header files into SWIG, and SWIG then generates the 'glue' code so that your C/C++ library can be used from another language such as Python, Java, C#, Ruby or Perl. In fact, there are implementations supporting over 20 such target languages. The participating students have had a productive summer and have extended the number of languages and features supported in SWIG's first Google Summer of Code™.
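To make the workflow concrete, here is the sort of plain C code you might feed to SWIG (the function and file names are made up for illustration). An interface file that %includes this header, run through something like "swig -python fastmath.i", would yield the glue code for a Python extension module exposing the same function.

```c
/* fastmath.h -- a hypothetical C library to expose via SWIG.
 * SWIG parses declarations like this one and generates the
 * target-language wrapper; no changes to the C code are needed. */

/* Greatest common divisor, iterative Euclid; result is
 * normalized to be non-negative. */
int gcd(int a, int b)
{
    while (b != 0) {
        int t = a % b;
        a = b;
        b = t;
    }
    return a < 0 ? -a : a;
}
```

From the target language, the wrapped function then looks native: in Python, for example, you would simply call fastmath.gcd(12, 18).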
Haoyu Bai has added support for the upcoming Python 3 release. Python is the most popular target language amongst SWIG users, and no doubt this addition will be much appreciated by those who are thinking of upgrading to Python 3. Haoyu has also exposed new Python 3 features which make coding faster and simpler when using Python extension code. The main features added are function annotations, buffer interfaces, and abstract base classes; they are outlined in more detail here: Python 3 Support.
Jan Jezabek has added a new 'language' module providing Windows Component Object Model (COM) support. This new module makes it possible for any COM-enabled language to easily call into C or C++ libraries. The COM module is more powerful than most SWIG modules because it ultimately serves several languages at once, since numerous languages can call into COM libraries. Compiled languages such as Visual Basic, and scripting languages such as JScript, VBA and VBScript that run on the Windows Scripting Host, are probably the most popular beneficiaries. One great use will be the ease of making C/C++ libraries available in applications supporting the various Basic dialects, such as OpenOffice.org and Microsoft Office. SWIG makes it easy to utilise more advanced C++ code, such as templates, and the COM module is no different: Jan has added very comprehensive coverage of the C and C++ languages. Full details are here: SWIG COM Module.
Maciej Drwal has added a module for calling C++ code from C code. It is now possible to automatically create a flattened API of C++ classes, so that the C++ functionality is available in the form of easy-to-use C structs and global functions. For example, features such as C++ template classes and functions are easily callable from C. One cool part of this project is the graceful handling of C++ exceptions in the calling C code. Some introductory documentation is available here: SWIG C Module.
Cheryl Foil has added an interesting feature to improve code documentation in the target language. It applies when C/C++ code is documented using the industry-standard Doxygen tool for annotating methods, classes, variables, etc. The new feature extracts the Doxygen comments from the code for use by one of the many target languages. Cheryl has added initial support for Java, so that the Doxygen comments are turned into JavaDoc comments embedded in the generated Java wrappers; see Doxygen support in SWIG for details.
Lastly, a great big thanks to the other mentors involved in making this happen: Ian Appru, Olly Betts, and Richard Boulton. And finally, thanks to Google for funding a great programme.
OpenCog and GSoC
Thursday, November 6, 2008
By Ben Goertzel, PhD, Director of Research, SIAI
This summer OpenCog was chosen by Google to participate in the Google Summer of Code™ program: Google funded 11 students from around the world to work under the supervision of experienced mentors associated with the OpenCog project and the associated OpenBiomind project.
OpenCog is a large AI software project with hugely ambitious goals (you can't get much more ambitious than "creating powerful AI at the human level and beyond") and a lot of "moving parts" -- and the most successful OpenCog GSoC projects seemed to be the ones that successfully split off "summer sized chunks" from the whole project, which were meaningful and important in themselves, and yet also formed part of the larger OpenCog endeavor ... moving toward greater and greater general intelligence.
Many of the GSoC projects were outstanding, but perhaps the most dramatically successful (in my own personal view) was Filip Maric's project (mentored by Predrag Janicic), which pioneered an entirely new approach to natural language parsing technology. The core parsing algorithm of the link parser, a popular open-source English parser (used within OpenCog's RelEx language processing subsystem), was replaced with a novel parsing algorithm based on a Boolean satisfiability (SAT) solver. The good news is, it actually works: it finds the best parses of a sentence faster than the old, standard parsing algorithm and, most importantly, provides excellent avenues for future integration of NL parsing with semantic analysis and other aspects of language-utilizing AI systems. The work needs a couple more months' effort to be fully wrapped up; Filip, after a brief break, has recently resumed it and will continue throughout November and December.
Cesar Maracondes, working with Joel Pitt, made a lot of progress on porting the code of the Probabilistic Logic Networks (PLN) probabilistic reasoning system from a proprietary codebase to the open-source OpenCog codebase, resolving numerous software design issues along the way. This work was very important as PLN is a key aspect of OpenCog's long-term AI plans. Along the way Cesar helped with porting OpenCog to MacOS.
There were also two extremely successful projects involving OpenBiomind, a sister project to OpenCog:
* Bhavesh Sanghvi (working with Murilo Queiroz) designed and implemented a Java user interface to the OpenBiomind bioinformatics toolkit, an important step which should greatly increase the appeal of the toolkit within the biological community (not all biologists are willing to use command-line tools, no matter how powerful)
* Paul Cao (working with Lucio Coelho) implemented a new machine learning technique within OpenBiomind, in which recursive feature selection is combined with OpenBiomind's novel "model ensemble based important features analysis." The empirical results on real bio datasets seem good. This is novel scientific research embodied in working open-source code, and should be a real asset to scientists doing biological data analysis.
And the list goes on and on: in this short post I can't come close to doing justice to all that was done, but please see our site for more details!
All in all, we are very grateful to Google for creating the GSoC program and including us in it. Thanks to Google, and most of all to the students and mentors involved.
Nmap's Fourth GSoC: Success Stories and Lessons Learned
Wednesday, November 5, 2008
By Gordon "Fyodor" Lyon, author of the Nmap Project and GSoC Mentor
The Nmap Security Scanner Project was honored to participate in our fourth Google Summer of Code™! The pencils-down date was two months ago, but so much code was produced that we're only now finishing the integration process. I finally have time to reflect on these last four years, what GSoC has brought us, and the lessons it has taught us.
In 2005 (detailed writeup), 70% (7 out of 10) of our students succeeded, and they tackled some wonderful projects! That year we began work on our new Zenmap GUI (then named Umit), the Ncat network communication utility, and the 2nd generation OS detection system. Doug Hoyte first made major contributions that summer, and continues helping to this day. I was the mentor for all 10 students, and I had them all send me patches rather than providing SVN access; Nmap didn't even have a public SVN tree back then.
In 2006 (full writeup), I had a better idea of what works and what doesn't and was able to improve the success rate to 80% (8 out of 10). Perhaps the most exciting project was the Nmap Scripting Engine (NSE), which has become one of Nmap's most compelling features. It allows users to write (and share) simple scripts to automate a wide variety of networking tasks. We also finished and integrated the 2nd generation OS detection system, and Zenmap (Umit) continued to improve. I again mentored the students myself without providing SVN access.
In 2007 (full writeup), our success rate grew again to 83% (5 of 6)! I attribute part of the success to me being less of a control freak. For example, I took only 4 students compared to 10 the previous year. The remaining two students were mentored by Diman Todorov, who created NSE as a 2006 SoC student. I also made the Nmap SVN server public and provided commit access to the students. This year we formally integrated Zenmap into the Nmap build system and packages, making massive improvements along the way. That summer also introduced David Fifield to the Nmap project and was the first SoC for Kris Katterjohn; both have been prolific developers ever since.
Enough with the history—let's take a look at our 2008 results! I'm happy to report that we had an 86% (6 out of 7) success rate. In other words, our success rate has increased every single year! I like to credit improved processes and interaction based on what we've learned before, but it also helps that we invite the best students back in later years. We've never had a 2nd year (or more) student fail. This year we expanded to three mentors, all of whom (except for me) were former SoC students. Now let's look in detail at our 2008 SoC accomplishments:
- Vladimir Mitrovic spent the summer improving the Zenmap GUI, under David Fifield's expert mentorship. They made huge usability and stability improvements, but the pinnacle of their summer achievement was clearly the scan aggregation and topology features! Scan aggregation allows you to conduct multiple scans at different times and merge them seamlessly into your existing results. Topology draws a beautiful interactive diagram of the discovered network.
- Jurand Nogiec also worked with David on Zenmap, and was responsible for many key UI improvements which now seem obvious in hindsight. For example, he added a cancel button for aborting a scan in progress without clearing the Nmap output, and he added context-sensitive help to the many dozens of options in the Profile Editor. He also made numerous improvements to the command entry field for people who like to type Nmap commands directly, while still benefiting from Zenmap's visual and searchable presentation of results.
- Patrick Donnelly made substantial NSE infrastructure improvements. He added mutex support and an NSE Standard Library, fixed some serious bugs, and rewrote and optimized a substantial amount of code (particularly the nse_init system). But his crowning accomplishment was the NSEDoc system, which uses special comments and variables in script and library code to generate a comprehensive documentation portal.
- Kris Katterjohn, who already had hundreds of useful Nmap patches to his name, returned for 2008 to write hundreds more! There is no way I can list everything he did here, particularly as his contributions ranged all over the map from writing NSE libraries (such as the username/password module and the standardized communication library) to improving Windows support (adding IPv6 and OpenSSL). His biggest project has been finishing up Ncat, our advanced Netcat replacement (which began as a 2005 SoC project by Chris Gibson). Ncat is now integrated with Nmap in our latest SVN revision.
- Michael Pattrick was David's third student, and he accomplished a wide variety of tasks. For example, he created a new OSAssist application for testing and integrating the thousands of Nmap OS detection submissions sent in by Nmap users all over the world. With OSAssist, integration is more accurate and much less tedious. Michael also built two prototypes (one in Perl and then another in C++) for an Ndiff application which compares two or more scan output files and prints out any changes. The prototypes proved so popular that David wrote a final version in Python which is now integrated with Nmap in our latest SVN revision.
- Philip Pickering spent the summer working on NSE scripts and libraries. We've already incorporated his libraries for binary data manipulation, DNS queries, Base64 encoding, SNMP, POP3, and cryptographic hashes. We've also incorporated several scripts he wrote utilizing these new libraries.
In addition to these core Nmap projects, 5 students were sponsored to work on the UMIT Nmap GUI (now a separate project led by Adriano Marques). Four of their five students passed, as described here.
Please join me in congratulating all these students for their excellent work! I'm particularly pleased that many of the SoC students have continued contributing even though the summer has ended. I'm looking forward to GSoC 2009 (assuming it is held again and they invite us), but 2008 will be a tough year to top!
GitTogether '08
Tuesday, November 4, 2008
By Shawn Pearce, Google Open Source Programs Office and Git contributor
Last week Google played host to the first Git developer conference at its Mountain View headquarters. The 3-day conference was well attended, with almost 25 major contributors and users coming out to discuss the past and future of the Git distributed version control system.
Several major topics were presented, sparking some highly interesting new discussions on the Git mailing list. A true Git library is now being planned to provide native bindings for scripting languages such as Perl and Python. Major user interface improvements to git send-email and the overall user experience were also introduced and are well under way. A Google Tech Talk, Contributing With Git, was given by Johannes Schindelin and is now publicly available on YouTube.
More details about the sessions, including slides and notes, are available on the git wiki.
A big thanks to Google for supporting open source projects by offering meeting space for the conference attendees.
Gerrit and Repo, the Android Source Management Tools
Monday, November 3, 2008
By Jeff Bailey, Open Source Team
A couple of weeks ago, we announced the Android open source release. Alongside it, we quietly released the tools we wrote to make handling a large multi-repository project manageable in git. If you've had a chance to look through the Android open source website, you'll have noticed references to a tool called repo. Why did we write it? With approximately 8.5 million lines of code (not including things like the Linux kernel!), keeping this all in one git tree would've been problematic for a few reasons:
* We want to delineate access control based on location in the tree.
* We want to be able to make some components replaceable at a later date.
* We needed trivial overlays for OEMs and other projects who either aren't ready or aren't able to embrace open source.
* We don't want our most technical people to spend their time as patch monkeys.
The repo tool uses an XML-based manifest file describing where the upstream repositories are, and how to merge them into a single working checkout. repo will recurse across all the git subtrees and handle uploads, pulls, and other needed items. repo has built-in knowledge of topic branches and makes working with them an essential part of the workflow.
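A minimal manifest of the kind repo consumes might look like this. The element names follow repo's manifest format, but the remote URL, revision, and project names below are illustrative placeholders rather than the actual Android manifest contents:

```xml
<!-- default.xml: a minimal, illustrative repo manifest -->
<manifest>
  <!-- Where the upstream git repositories live -->
  <remote name="origin" fetch="git://git.example.com/" />
  <!-- Defaults applied to every project below -->
  <default revision="master" remote="origin" />
  <!-- Each project maps an upstream repository to a path in the checkout -->
  <project path="build" name="platform/build" />
  <project path="external/webkit" name="platform/external/webkit" />
</manifest>
```

Given such a file, `repo sync` can walk every listed project and bring the whole multi-repository tree to a consistent state in one command.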
The Gerrit code review tool is based on Rietveld. Gerrit is itself split into two components: one half runs on Google App Engine to provide the front-end web service, and the other runs on a server to handle attempted merges into the "upstream" branch and the various code review branches. When we integrate the auto-builders into the system, that will also be handled by Gerrit.
We have a workflow diagram that shows how code gets into the system for Android. If you're looking to switch to git, but don't want to lose the ability for multiple people to commit into an upstream tree, this is one solution for you to consider. Interested? Find us at [email protected]
Pardus' Google Summer of Code Experience
Thursday, October 30, 2008
By Faik Yalcin Uygur, Pardus Google Summer of Code Organization Administrator
For Pardus' first year in Google Summer of Code™, it was no surprise to us that most of our applications came from Turkey, since Pardus is the best-known Linux distribution in our country. But as nearly every review of the project mentions, we are working on our global awareness, and we hope to get more international applications in the coming years.
This year we had 17 student applications and 5 students were accepted to the program; four of them completed their projects successfully.
Cihangir Besiktas worked on adding Internet connection sharing to Pardus' network manager application. The project's aim was to let an Internet-connected box act as a gateway for its internal network, so that other boxes on the network can reach the Internet. After the user simply selects the interface that is connected to the Internet and the interface the connection is to be shared on, everything else is done automatically by the network manager. All of Cihangir's work has been integrated into the network manager and is now part of the latest release of Pardus. Cihangir kept a blog about his project and documented his work.
Isbaran Akcayir worked on adding 802.1x support to Pardus' network manager application. 802.1x provides authentication for devices attached to a LAN port and is based on the Extensible Authentication Protocol. Although it is possible to connect to such networks using the wpa_supplicant package from the console, Isbaran added a frontend to Pardus' network manager for easy configuration of, and connection to, 802.1x networks. Isbaran's work is integrated into the network manager and is now part of the latest release of Pardus.
Mehmet Ozan Kabak worked on a common notification manager to be used by Pardus' manager applications, inspired by the Growl application for the Mac. Mehmet successfully completed his project, which has become a Qt4-based, skinnable notification management system working over D-Bus. He kept a blog while developing and documented his project. The latest release of Pardus is KDE3-based, so it is not possible to integrate Mehmet's work right now, but hopefully it will ship with the next release of Pardus.
Türker Sezer worked on an easy-to-use, wizard-based GUI application for creating Pardus CD/DVD/USB distribution media. Pardus does not provide a package selection screen in its installation program, YALI, so his project lets anyone create a customized Pardus distribution. He completed his project successfully, and while developing it he also helped us fix the live CD creation problems in our own application. He will keep working on the project: after fixing some layout and usability problems, he will package the application so that it can be installed from the Pardus repositories.
Our first year was beneficial for us and we hope also for our students. Congratulations to all of them and their mentors!
For Pardus' first year in Google Summer of Code™, it was not a surprise for us that most of our applications were from Turkey, since Pardus is the most well known Linux distribution in our country. But as nearly every review about the project mentions, we are working on our global awareness, and we hope to get more international applications in the coming years.
This year we had 17 student applications and 5 students were accepted to the program; four of them completed their projects successfully.
Cihangir Besiktas, worked on adding Internet sharing capability to Pardus' network manager application. The project's aim was to make an Internet connected box to act as a gateway to its internal network so that other boxes in the network can connect to Internet. By only selecting the interface that is connected to Internet and the interface that Internet is going to be shared to, everything can be done automatically by the network manager. All the work done by Cihangir has been integrated into the network manager and is now part of the latest release of Pardus. Cihangir kept a blog about his project and documented his work.
Isbaran Akcayir worked on adding 802.1x support to Pardus' network manager application. 802.1x provides authentication for devices attached to a LAN port and is based on the Extensible Authentication Protocol (EAP). Although it is already possible to connect to such networks with the wpa_supplicant package from the console, Isbaran added a frontend to Pardus' network manager for easy configuration of and connection to 802.1x networks. His work is integrated into the network manager and is now part of the latest release of Pardus.
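For the curious, a wired 802.1x connection in wpa_supplicant is described by a small `network` block; a frontend mostly collects the EAP method and credentials from the user and writes that block out. The generator below is a hedged sketch, not Isbaran's actual code, and the field choices (PEAP default, no certificate options) are illustrative.

```python
# Sketch of generating a wpa_supplicant configuration block for a wired
# 802.1x (EAP) network. Illustrative only; not the Pardus frontend code.

def wpa_8021x_block(identity, password, eap="PEAP"):
    """Return a wpa_supplicant `network` block for an 802.1x LAN port."""
    lines = [
        "network={",
        "    key_mgmt=IEEE8021X",        # port-based auth, no WPA handshake
        f"    eap={eap}",                # EAP method, e.g. PEAP or TTLS
        f'    identity="{identity}"',
        f'    password="{password}"',
        "}",
    ]
    return "\n".join(lines)

print(wpa_8021x_block("alice", "s3cret"))
```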
Mehmet Ozan Kabak worked on a common notification manager to be used by Pardus' manager applications, a project inspired by the Growl application for the Mac. Mehmet successfully completed his project, which has become a Qt4-based, skinnable notification management system running over D-Bus. He kept a blog while developing and documented his project. The latest release of Pardus is KDE3-based, so it is not possible to integrate Mehmet's work right now, but we hope to do so in the next release of Pardus.
Türker Sezer worked on an easy-to-use, wizard-based GUI application for creating Pardus CD/DVD/USB distribution media. Pardus does not provide a package selection screen in its installation program, YALI, so his project allows anyone to create a customized Pardus distribution. He completed his project successfully, and while developing it he also helped us fix the live CD creation problems in our own application. He plans to keep working on his project: after fixing some layout and usability problems, he will package his application so that it becomes installable from the Pardus repositories.
Our first year was beneficial for us and, we hope, for our students as well. Congratulations to all of them and their mentors!
Gallery's First Sprint
Wednesday, October 29, 2008
By Chris Kelly, Gallery Project Manager
Last week, Google's Open Source Team hosted the Gallery project's first team sprint. Ten core team members, some from their offices at Google and some from as far away as Serbia, got together on the Google campus on October 22-24 to figure out the future of the Gallery project.
During the weeks prior to the sprint, the Gallery community embarked on some ambitious discussions about what we could do if we took advantage of new technology. We evaluated various PHP frameworks by implementing a basic UI in each one, reviewed feature lists, and examined as many available options as possible. Combined with usability work driven by Jakob Hilden that originated with the OpenUsability project's Season of Usability this year, these discussions and explorations paved the way for the sprint: major decisions and the beginning of a rewrite!
Once at Google we spent a lot of time discussing options, tinkering with code, and continuing discussions into the evening at bars and restaurants in Mountain View.
By Friday, we settled on code standards, feature lists, a new project management methodology using trackers on SourceForge and a shared task list in Chandler, and the Kohana PHP framework. We didn't quite finish the code yet, but it's all in our SourceForge Subversion repository in a temporary location, and we look forward to introducing Gallery 3 to the world in a few months.
More pictures and more details will be available on gallery.menalto.com later this week.
First Android Patch Accepted!
Tuesday, October 21, 2008
By Jeff Bailey, Open Source Team
This morning at 8 AM Pacific, I had the joy of participating in the Android release. If you've been following along, you'll have seen how excited we've been - and are - to publish millions of lines of code to the outside world.
Well, the number just went up by six more lines. It's a small start, but knowing that we accepted our first patch from a contributor external to the Open Handset Alliance just 4.5 hours after unveiling the code reinforces to me why open sourcing this is exactly the right thing to do.
Happy Hacking. =)
(Update: I just checked, and we're up to 5 accepted patches from 8 submitted. Way cool.)
Android: The Open Source Cell Phone
By Chris DiBona, Open Source Team
As you might have heard, on the 22nd of October we will start to see the first deliveries of the T-Mobile G1, the first phone based on the Android mobile platform. This is an incredibly exciting time for us, the culmination of over three years of work done by hundreds of people at the companies that make up the Open Handset Alliance. All of us are waiting with bated breath to see how the phone is used and what its impact will be on the future of mobile phones and computing.
But that's tomorrow....
Today I'm very proud to announce that we are releasing the code that went into that same revolutionary device. Let me present Android: the first complete and highly functional, mass market, Open Source mobile platform. Built with and on top of a bunch of Open Source software, this is one of the largest releases in the history of FOSS. Our goal was to make millions of terrific phones possible, to raise the bar on what people can expect from any mobile phone and to release the code that makes it possible.
So check out the code, build a device, send in some patches and become a committer.
Android is terrific now, and with every new developer that joins us Android gets better. Not just for the Open Handset Alliance, not just for Google, and not just for T-Mobile G1 users — but for everyone. Through the use of Open Source we can change how the world thinks of cell phones and portable computing, together.
If you'd like to know more, visit the Android open source page and read all about its debut there.
Thousand Parsec and Google Summer of Code
Friday, October 17, 2008
By Tim Ansell, Technical Solutions Engineering Team and Thousand Parsec Project Co-Founder
This was the second year that Thousand Parsec took part in Google Summer of Code™, and we accomplished even more than we did in our very successful first year. For those who don't know, Thousand Parsec is a framework for building turn-based space empire building games, supporting many different types of rulesets with a wide variety of features.
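To give a flavour of what a "ruleset" is: conceptually, each ruleset plugs the objects, orders and turn processing for one game type into the framework. The class names and mechanics below are invented for illustration and do not match the actual Thousand Parsec APIs.

```python
# Conceptual sketch of a ruleset plugin; invented names, not the real
# Thousand Parsec interfaces.

class Ruleset:
    """Base class: one subclass per game type a server can host."""
    name = "base"

    def create_universe(self):
        raise NotImplementedError

    def process_turn(self, universe, orders):
        raise NotImplementedError


class QuickPlay(Ruleset):
    """A tiny example ruleset: ships step toward their ordered targets."""
    name = "quickplay"

    def create_universe(self):
        return {"ships": {"scout": (0, 0)}}

    def process_turn(self, universe, orders):
        for ship, (tx, ty) in orders.items():
            x, y = universe["ships"][ship]
            # Move one step along each axis toward the target per turn.
            step = lambda a, b: a + (b > a) - (b < a)
            universe["ships"][ship] = (step(x, tx), step(y, ty))
        return universe
```

A server hosting several such classes can offer players a choice of games, which is what the students' quick-play rulesets add in practice.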
In 2008, we had 8 students, all of whom successfully completed their projects. Together they made a massive contribution to our code base, writing more than 130K lines of code across 5 different modules. This year we were also pleased to see a great deal more collaboration and interaction between our students and Thousand Parsec's wider community.
One of the most exciting projects to come out of Summer of Code 2008 is our new 3D client. It takes our existing libraries and couples them with the sweet Python bindings for Ogre 3D (another 2008 mentoring organization) to build a rich client full of eye candy. Since the end of Summer of Code, Eugene Tan has been hard at work on his first release, and he is on track to ship it this week. Check out these screenshots for a preview:
Our primary server also got a workout, with 3 students working hard on improving its functionality. All of our students' work has been merged into mainline and will be in our next release (which is also being prepped at this very moment). Ryan Neufeld and Dustin White both added new "quick play" rulesets, while Aaron Mavrinac added the ability to configure the server remotely. This gives people a choice of 4 different games to play, 3 of which were developed as Summer of Code projects.
Our prototype and backup server also got some love, with Juan Lafont contributing a quick play game of his own creation called "DroneSec". This ruleset required him to improve many of the server's features as well, and he is in the process of preparing a release.
Aaron, who initially worked on remote configuration for tpserver-cpp, has also been working hard on adding single player support. His work touched and improved all of our modules and even other students' projects. Aaron is currently driving the next release of our primary client, which will include a wizard letting anyone set up a local game, including the server, AI opponents and other options.
Two students, Victor Ivri and Vincent Verhoeven, each worked on creating AI frameworks and testing them out on the new rulesets developed this year. Having two frameworks allows us to continually refine their abilities, giving people the chance to play non-trivial game scenarios without having to find human opponents.
Zhang Chiyuan's project took a completely different tack: adding Schemepy support to Thousand Parsec, which allows Scheme to be used from our Python framework. Zhang completely rewrote the existing backends and added a number of new ones. In the process, he created an extensive compliance suite that lets us quickly check that our backends are functioning correctly. He has also ported our Python client and servers to the new interfaces.
Overall, we're very proud of all our students' work, all of which has made a dramatic impact on the health and usefulness of Thousand Parsec. Of course, the entire community hopes they continue to contribute in the future. We would like to thank the Google Open Source Team for all their efforts in running such an awesome program.
Finally, congratulations to all of our mentors and students for their many accomplishments!
Zurich Open Source Jam 5
Wednesday, October 15, 2008
By Nóirín Shirley, Technical Writing Team
Google Zurich has been a hive of activity lately — with Code Jam, Googler for a Day, and now the latest Open Source Jam.
Our lightning talks were far too interesting and informative to confine to five minutes, but with snacks and beer in hand, no one seemed to mind.
Thomas Koch kicked off the evening, with a talk on using Vim as your IDE. Peter Arrenbrecht gave us an introduction to Patch Branches, and followed up with a quick demo during our break.
Paolo Bonzini told us all about his favourite project — GNU Smalltalk — and sparked a bit of a discussion on making a living from Open Source work. That kept us entertained while our experts worked to sort out some "technical issues" with the projector!
Once we got things up and running again, Gabriel Petrovay gave us a demo of XQuery support in Eclipse, using XQDT and Zorba. And, to round off the evening, Simon Leinen snagged himself a cool Google T-shirt by giving a talk about OSS vs "The Cloud"!
If you're still not sure what the Open Source Jam is — well, it depends on who shows up! It's an open forum for open source fans, hackers, and just plain geeks to get together, have a beer, and hear what's going on. And if you're looking for people to try out your latest patches, or to help you get a project off the ground, it's a great place to start a conversation. This time around, we had geeks from Germany and Italy, as well as plenty of locals — so there's no excuse!
But if you missed out on this one, don't worry — the Zurich Open Source Jams are semi-regular events. To stay informed about the details of the next one, or to catch up on discussions about previous ones, join the Open Source Jam Zurich Google Group.
Hackystat's First Google Summer of Code
Tuesday, October 14, 2008
By Philip Johnson, Hackystat Project Administrator
The Hackystat Project had a great Google Summer of Code™ 2008 experience. Four students started the program, and while one had to drop due to sudden illness, the other three went on to successfully complete their projects. Shaoxuan Zhang worked on a Wicket-based user interface to Hackystat, and implemented a number of new features including support for "portfolio" analyses. Matthew Bassett developed a Hackystat sensor for Microsoft Team Foundation Server. Eva Wong developed a Hackystat sensor data visualization package using Flare. You can learn more about each student's experience from their own perspective by reading Matthew's, Shaoxuan's, and Eva's blogs. Finally, my blog contains a reflection on Summer of Code from a first time administrator's perspective. We definitely hope to be back next year, and encourage all Open Source communities to participate.
Congratulations to our mentors and students for their many successes this year!