Chris Lalancette

Oz 0.14.0 Release (2015-06-26)

All,
I'm pleased to announce release 0.14.0 of Oz. Oz is a program for doing automated installation of guest operating systems with limited input from the user. Release 0.14.0 is a bugfix and feature release for Oz. Some of the highlights between Oz 0.13.0 and 0.14.0 are:
<ul>
<li>Fix a bug in checksum checking (this should work again)</li>
<li>Add a global lock around pool refresh; this should get rid of a user-visible failure</li>
<li>Support for Debian 8</li>
<li>Support for Ubuntu 15.04</li>
<li>Support for Fedora 22</li>
<li>Support for installing aarch64 guests</li>
<li>Support for installing POWER guests</li>
<li>Support for installing 32-bit ARM guests</li>
</ul>
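To give a flavor of how these guests actually get installed, here is a sketch of driving Oz by hand. The TDL element names below follow the examples on the Oz wiki, but treat the exact file contents, the template name, and the install-tree URL as placeholders to adapt to your own setup:

```shell
# Write a minimal TDL (Template Description Language) file for a guest.
# The URL below is a placeholder; point it at a real install tree.
cat <<'EOF' > fedora22.tdl
<template>
  <name>fedora22</name>
  <os>
    <name>Fedora</name>
    <version>22</version>
    <arch>x86_64</arch>
    <install type='url'>
      <url>http://example.com/fedora/releases/22/Server/x86_64/os/</url>
    </install>
  </os>
  <description>Minimal Fedora 22 guest</description>
</template>
EOF

# The actual install would then be kicked off with something like:
#   oz-install fedora22.tdl
grep '<name>' fedora22.tdl
```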
A tarball and zipfile of this release are available on the Github releases page: <a href="https://github.com/clalancette/oz/releases">https://github.com/clalancette/oz/releases</a>. Packages for Rawhide, Fedora-21, Fedora-22, EPEL-7, and EPEL-6 have been built in Koji and will eventually make their way to stable. Instructions on how to get and use Oz are available at <a href="http://github.com/clalancette/oz/wiki">http://github.com/clalancette/oz/wiki</a>.
If you have questions or comments about Oz, please feel free to contact me at clalancette at gmail.com, or open up an issue on the github page: <a href="http://github.com/clalancette/oz/issues">http://github.com/clalancette/oz/issues</a>.
Thanks to everyone who contributed to this release through bug reports, patches, and suggestions for improvement.

Oz 0.13.0 release (2015-03-07)

I'm pleased to announce release 0.13.0 of Oz. Oz is a program
for doing automated installation of guest operating systems with
limited input from the user. Release 0.13.0 is a bugfix and feature
release for Oz. Some of the highlights between Oz 0.12.0 and 0.13.0
are:
<ul>
<li>For Fedora, if the user specifies a version that isn't
supported yet, fall back to the latest supported version, as that
will often work.</li>
<li>Fix a regression where we forgot to force the qcow2 image type</li>
<li>Allow installs that use more than one installation device</li>
<li>Add support for RHEL 6.5</li>
<li>Rename OEL-6 to OL-6</li>
<li>Add support for Ubuntu 14.04</li>
<li>Add Windows 8.1 support</li>
<li>Add CentOS-7 support</li>
<li>Add the ability to specify kernel parameters in the TDL</li>
<li>Make sure to remove DHCP leases from guests after the install</li>
<li>Fix support for FreeBSD</li>
<li>Add in support for TDL "precommands"; these are commands that are
run *before* package installation</li>
<li>Fix up file locking</li>
<li>Add support for RHEL 5.11</li>
<li>Remove Ubuntu ssh keys at the end of installation</li>
<li>Add support for Ubuntu 14.10</li>
<li>Add support for XInclude, for merging various TDLs together</li>
<li>Add Fedora 21 support</li>
<li>Add support for ppc64 and ppc64le</li>
</ul>
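The XInclude support mentioned above is plain W3C XInclude; as a sketch (the fragment file name and package list here are made up for illustration), a TDL can pull in a shared fragment like this:

```shell
# A shared fragment listing packages common to several templates.
cat <<'EOF' > common-packages.xml
<packages>
  <package name='vim-enhanced'/>
  <package name='git'/>
</packages>
EOF

# A TDL that merges the fragment in via XInclude.
cat <<'EOF' > merged.tdl
<template xmlns:xi="http://www.w3.org/2001/XInclude">
  <name>fedora21-common</name>
  <xi:include href="common-packages.xml"/>
</template>
EOF

# xmllint --xinclude merged.tdl would show the resolved, merged document.
grep 'xi:include' merged.tdl
```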
A tarball and zipfile of this release are available on the Github
releases page: <a href="https://github.com/clalancette/oz/releases">https://github.com/clalancette/oz/releases</a>. Packages
for Fedora-20, Fedora-21, Fedora-22, EPEL-6, and EPEL-7 have been
built in Koji and will eventually make their way to stable.
Instructions on how to get and
use Oz are available at <a href="http://github.com/clalancette/oz/wiki">http://github.com/clalancette/oz/wiki</a>.
If you have questions or comments about Oz, please feel free to
contact me at clalancette at gmail.com, or open up an issue on the
github page: <a href="http://github.com/clalancette/oz/issues">http://github.com/clalancette/oz/issues</a>.
Thanks to everyone who contributed to this release through bug reports,
patches, and suggestions for improvement.

Developing STM32 microcontroller code on Linux (Part 7 of 8, building and running a simple STM32 program) (2014-01-15)

The first post of this series covered the steps to build and run code for the STM32. The second post covered how to build a cross-compiler for the STM32. The third post covered how to build a debugger for the STM32. The fourth post covered building and configuring OpenOCD for your development environment. The fifth post covered building the device library, libopencm3. The sixth post covered linker scripts and command-line options necessary for building and linking programs to run on the STM32. This post will cover building and running a program on the STM32.
<br><br>
In the previous posts we dealt with all of the setup necessary to build programs for the STM32. It is finally time to take advantage of all of those tools and build and run something. Recall from previous posts that we already have an OpenOCD configuration file, a linker script, and a Makefile set up. All that really remains is for us to write the code, build it, and flash it to our device. The code below is very specific to the STM32F3DISCOVERY; that is, it very much requires that the GPIO for the LED be on GPIO bank E, pin 12 on the board. If you have one of the other STM32 DISCOVERY boards, you'll need to look at the schematics and find one of the GPIOs that is hooked to an LED.
<br><br>
We are going to take an extremely simple example from libopencm3. This example does nothing more than blink one of the LEDs on the board on and off continuously. While this is simple, it will validate that everything that we've done before is actually correct.
<br><br>Here is the code:
<pre><code>
$ cd ~/stm32-project
$ cat <<EOF > tut.c
#include <libopencm3/stm32/rcc.h>
#include <libopencm3/stm32/gpio.h>

static void gpio_setup(void)
{
    /* Enable GPIOE clock. */
    rcc_peripheral_enable_clock(&RCC_AHBENR, RCC_AHBENR_IOPEEN);

    /* Set GPIO12 (in GPIO port E) to 'output push-pull'. */
    gpio_mode_setup(GPIOE, GPIO_MODE_OUTPUT, GPIO_PUPD_NONE,
                    GPIO12);
}

int main(void)
{
    int i;

    gpio_setup();

    /* Blink the LED (PE12) on the board. */
    while (1) {
        /* Using API function gpio_toggle(): */
        gpio_toggle(GPIOE, GPIO12); /* LED on/off */
        for (i = 0; i < 2000000; i++) /* Wait a bit. */
            __asm__("nop");
    }

    return 0;
}
EOF
</code></pre>
You should now be able to type "make", and the project should build. Typing "make flash" should run OpenOCD, install the program to the board, and start blinking an LED. Remember that our Makefile requires sudo access to actually run openocd. If you don't have sudo access, you can either add it (by adding your user to the wheel group), or just su to root and run the openocd command by hand.

Developing STM32 microcontroller code on Linux (Part 6 of 8, building and linking STM32 programs) (2014-01-13)

The first post of this series covered the steps to build and run code for the STM32. The second post covered how to build a cross-compiler for the STM32. The third post covered how to build a debugger for the STM32. The fourth post covered building and configuring OpenOCD for your development environment. The fifth post covered building the device library, libopencm3. This post will cover linker scripts and command-line options necessary for building and linking programs to run on the STM32.
<br><br>
Once we have all of the previous steps done, we are achingly close to being able to build and run code on our target STM32 processor. However, there is one more set of low-level details that we have to understand before we can get there. Those details revolve around how our C code gets turned into machine code, and how that code is laid out in memory.
<br><br>
As you may know, compiling code to run on a target is roughly a two-step process:
<ol>
<li>Turn C/C++ code into machine code the target processor understands. The outputs of this step are known as object files.</li>
<li>Take the object files and link them together to form a coherent binary. The output of this step is generally an ELF file.</li>
</ol>
Let's talk about these two steps in more detail.
<br><br>
<h3>Compile step</h3>
During compilation, the compiler parses the C/C++ code and turns it into an object file. A little more concretely, what we want to have our cross-compiler do is to take our C code, turn it into ARM instructions that can run on the STM32, and then output that into object files.
<br><br>
To do this, we use our cross-compiler. As with any version of gcc, there are many flags that can be passed to our cross-compiler, and they can have many effects on the code that is output. What I'm going to present here is a set of flags that I've found works pretty well. This isn't necessarily optimal in any dimension, but will at least serve as a starting point for our code. I'll also point out that this is where we start to get into the differences between the various STM32F* processors. For instance, the STM32F4 processor has an FPU, while the STM32F3 does not. This will affect the flags that we will pass to the compiler.
<br><br>
For the STM32F3 DISCOVERY board that I am using, here are the compiler flags:
<code>
-Wall -Wextra -Wimplicit-function-declaration -Wredundant-decls -Wstrict-prototypes -Wundef -Wshadow -g -fno-common -mcpu=cortex-m3 -mthumb -mfloat-abi=hard -MD
</code><br>
Let's go through each of them:
<ul>
<li>The -W* flags enable compile-time warnings for several classes of common errors. I find that enabling these warnings and fixing what they report usually makes the code much better.</li>
<li>The -g flag includes debugging symbols in the binary; this makes the code easier to debug, at the expense of some code space.</li>
<li>The -fno-common flag places uninitialized global variables in the object file's BSS section rather than emitting them as "common" blocks, so duplicate definitions are caught at link time.</li>
<li>The -mcpu=cortex-m3 flag tells the compiler to generate code optimized for the Cortex-M3.</li>
<li>The -mthumb flag tells gcc to generate ARM Thumb code, which is smaller and more compact than full ARM code.</li>
<li>The -mfloat-abi=hard flag selects the hard-float ABI; this doesn't make a huge difference on a processor without an FPU, but it is a good habit to get into.</li>
<li>The -MD flag makes gcc generate dependency files while compiling, which is useful for Makefiles.</li>
</ul>
<h3>Linking step</h3>
Once all of the individual files have been compiled, they are put together into the final binary by the linker. This is more complicated when targeting an embedded platform vs. a regular program. In particular, we have to tell the linker not only which files to link together, but also <b>how</b> to lay the resulting binary out on flash and in memory.<br><br>
We'll first start by talking about the flags that we need to pass to the linker to make this work. Here are the set of flags we are going to start with:
<code>
--static -lc -lnosys -T tut.ld -nostartfiles -Wl,--gc-sections -mcpu=cortex-m3 -mthumb -mfloat-abi=hard -lm -Wl,-Map=tut.map
</code><br>
Again, let's go through each of them:
<ul>
<li>The --static flag tells the linker to produce a statically linked binary rather than a dynamically linked one. It probably isn't strictly necessary in this case, but we add it anyway.</li>
<li>The -lc flag links the binary against the C library, which is newlib in our case. That gives us access to various convenient functions, such as printf(), scanf(), etc.</li>
<li>The -lnosys flag links the binary against the "nosys" library. Several of the convenience functions in the C library require underlying implementations of certain functions to operate, such as _write() for printf(). Since we don't have a POSIX operating system to provide these, the nosys library supplies empty stub functions for them. If we want, we can later define our own versions of these stubs, which will be used instead.</li>
<li>The -T tut.ld flag tells the linker to use tut.ld as the linker script; we'll talk more about linker scripts below.</li>
<li>The -nostartfiles flag tells the linker not to use the standard system startup files. Since we don't have an OS here, we can't rely on the standard OS facilities to start our program.</li>
<li>The -Wl,--gc-sections flag tells the linker to garbage-collect unused sections: any sections that are not referenced are removed, which can shrink the resulting binary.</li>
<li>The -mcpu=cortex-m3, -mthumb, and -mfloat-abi=hard flags have the same meaning as for the compile flags.</li>
<li>The -lm flag links the binary against the math library. It isn't strictly required for our little programs, but most programs want it sooner or later.</li>
<li>The -Wl,-Map=tut.map flag tells the linker to generate a map file in tut.map. The map file is helpful for debugging, but is informational only.</li>
</ul>
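The --gc-sections behavior is easy to see on the host. One detail worth knowing: the linker discards whole sections, so for this demonstration we also compile with -ffunction-sections so that each function gets its own section to discard (a flag not used in the tutorial's own CFLAGS):

```shell
# A function that is never referenced anywhere.
cat <<'EOF' > gc.c
int unused_function(int a)
{
    return a + 1;
}

int main(void)
{
    return 0;
}
EOF

# -ffunction-sections gives --gc-sections something fine-grained to drop.
gcc -ffunction-sections -c gc.c -o gc.o
gcc -Wl,--gc-sections gc.o -o gc

# unused_function was garbage-collected out of the final binary:
nm gc | grep unused_function || echo "unused_function was removed"
```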
<h3>Linker script</h3>
As mentioned before, the linker script tells the linker how to lay out the resulting binary in memory. This script is highly chip specific. The details have to do with where the processor jumps to on reset, and where it expects certain things to be. Note that most chips are actually configurable (based on some jumper settings), so where it jumps to on reset can change. Luckily, for most off-the-shelf STM32 designs, including the DISCOVERY boards, it is always configured to expect the code to start out in flash. Therefore, the linker script tells the linker to lay out the code in flash, but to put the data and bss in RAM.<br><br>With all that said, libopencm3 actually makes this easy on you. They have default linker scripts for each of the chips that are supported. All you really need to do is to fill in a small linker script with the RAM and FLASH size of your chip, include the default libopencm3 one, and away you go.
<br><br>
So we are going to put all of the above together and write a Makefile and a linker script into the project directory we created in the last tutorial. Neither of these are necessarily the best examples of what to do, but they will get the job done. First the Makefile:
<br><br>
<pre><code>
$ cd ~/stm32-project
$ cat <<EOF > Makefile
CC=arm-none-eabi-gcc
LD=\$(CC)
OBJCOPY=arm-none-eabi-objcopy
OPENOCD=~/opt/cross/bin/openocd
CFLAGS=-Wall -Wextra -Wimplicit-function-declaration -Wredundant-decls -Wstrict-prototypes -Wundef -Wshadow -g -fno-common -mcpu=cortex-m3 -mthumb -mfloat-abi=hard -MD -DSTM32F3
LDFLAGS=--static -lc -lnosys -T tut.ld -nostartfiles -Wl,--gc-sections -mcpu=cortex-m3 -mthumb -mfloat-abi=hard -lm -Wl,-Map=tut.map
OBJS=tut.o

all: tut.bin

tut.bin: tut.elf
$( echo -e "\t" )\$(OBJCOPY) -Obinary tut.elf tut.bin

tut.elf: \$(OBJS)
$( echo -e "\t" )\$(CC) -o tut.elf \$(OBJS) ~/opt/cross/arm-none-eabi/lib/libopencm3_stm32f3.a \$(LDFLAGS)

flash: tut.bin
$( echo -e "\t" )sudo \$(OPENOCD) -f stm32-openocd.cfg -c "init" -c "reset init" -c "flash write_image erase tut.bin 0x08000000" -c "reset run" -c "shutdown"

clean:
$( echo -e "\t" )rm -f *.elf *.bin *.list *.map *.o *.d *~
EOF
</code></pre>
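One cryptic-looking piece of that heredoc deserves a note: make requires recipe lines to start with a literal tab, and since an unquoted heredoc performs command substitution, each $( echo -e "\t" ) fragment expands to exactly one tab character as the file is written. A tiny self-contained demonstration:

```shell
# Inside an unquoted heredoc, $( ... ) is executed and its trailing
# newline stripped, so $( echo -e "\t" ) leaves a single tab character.
cat <<EOF > demo.mk
all:
$( echo -e "\t" )@echo recipe ran with a real tab
EOF

# The recipe line now starts with a literal tab, as make requires:
grep -c "$(printf '\t')@echo" demo.mk
```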
You should notice a couple of things in the Makefile. First, we use all of the compiler and linker flags that we talked about earlier. Second, our object list ($(OBJS)) is tut.o, built from tut.c, which we'll create in the next post. And third, we have a flash target that will build the project and flash it onto the target processor. This requires the OpenOCD configuration file that we created a couple of posts ago.<br><br>Now the linker script:
<pre><code>
$ cat <<EOF > tut.ld
MEMORY
{
rom (rx) : ORIGIN = 0x08000000, LENGTH = 256K
ram (rwx) : ORIGIN = 0x20000000, LENGTH = 40K
}
/* Include the common ld script. */
INCLUDE libopencm3_stm32f3.ld
EOF
</code></pre>
You'll notice that there isn't a lot here. We just have to define the RAM location and size, and the ROM (Flash) location and size, and the default libopencm3 linker script will take care of the rest.<br><br>
We now have all of the parts in place. The next post will cover writing, compiling, and running a simple program on the board.

Developing STM32 microcontroller code on Linux (Part 5 of 8, building libopencm3) (2014-01-10)

The first post of this series covered the steps to build and run code for the STM32. The second post covered how to build a cross-compiler for the STM32. The third post covered how to build a debugger for the STM32. The fourth post covered building and configuring OpenOCD for your development environment. This post will cover building the device library, libopencm3.
<br><br>
As mentioned in the introductory post, it makes our life a lot easier if we use a device library. This is a library that abstracts the low-level details of the hardware registers away from us, and gives us a nice consistent API to use. While ST provides one of these directly, it is not open source (or more specifically, its open-source status is murky). Luckily there is libopencm3, an open-source re-implementation that is also a better library in my opinion. As usual, I'm going to compile a specific version of libopencm3; newer or older versions may or may not work for you.
<br><br>
As before, we start out by exporting some environment variables:
<pre><code>
$ export TOPDIR=~/cross-src
$ export TARGET=arm-none-eabi
$ export PREFIX=~/opt/cross
$ export BUILDPROCS=$( getconf _NPROCESSORS_ONLN )
$ export PATH=$PREFIX/bin:$PATH
</code></pre>
The TOPDIR environment variable is the directory in which the sources are stored. The TARGET environment variable is the architecture that we want our compiler to emit code for. For ARM chips without an operating system (like the STM32), we want arm-none-eabi. The PREFIX environment variable is the location we want our cross-compile tools to end up in; feel free to change this to something more suitable. The BUILDPROCS environment variable is the number of processors that we can use; we will use all of them while building to substantially speed up the build process. Finally, we need to add the location of the cross-compile binaries to our PATH so that later building stages can find it.
<br><br>
Now that we have our environment set up, we can get the code. Note that unlike most of the other tools covered in this tutorial, libopencm3 does not do releases. They expect (more specifically, require) that you clone the latest version and use that. That's what we are going to do here. As of this writing, the latest libopencm3 commit hash is a909b5ca9e18f802e3caef19e63d38861662c128. Since the libopencm3 developers don't guarantee API stability, all of the steps below will assume the API as of that commit. If you decide to use a newer version of libopencm3, you may have to update the example code I give you to conform to the new API. With that out of the way, let's get it:
<pre><code>
$ sudo yum install git
$ cd $TOPDIR
$ git clone git://github.com/libopencm3/libopencm3.git
$ cd libopencm3
$ git checkout -b clalancette-tutorial \
a909b5ca9e18f802e3caef19e63d38861662c128
</code></pre>
What we've done here is to clone the repository, then check out a new branch pointing at commit a909b5ca9e18f802e3caef19e63d38861662c128. This ensures that even if the library moves forward in the future, we will always use that exact commit for the purposes of this tutorial. Next we build the library:
<pre><code>
$ unset PREFIX
$ make DETECT_TOOLCHAIN=1
$ make DETECT_TOOLCHAIN=1 install
$ export PREFIX=~/opt/cross
</code></pre>
Here we need to unset PREFIX because libopencm3 uses PREFIX for the toolchain name prefix (arm-none-eabi), <b>not</b> the path prefix. Once we've done that, we can tell libopencm3 to detect the toolchain, and then use it to build libopencm3. Finally we use the install target to install the headers and the static libraries (.a files) into our toolchain. Assuming this is successful, everything necessary should be in ~/opt/cross/arm-none-eabi/, with the libraries in lib/libopencm3* and the header files in include/libopencm3. Note that there is one .a file per chip that is supported by libopencm3; we'll return to this later when we start building code for our chip.

Developing STM32 microcontroller code on Linux (Part 4 of 8, building openocd) (2014-01-09)

The first post of this series covered the steps to build and run code for the STM32. The second post covered how to build a cross-compiler for the STM32. The third post covered how to build a debugger for the STM32. This post is going to cover building OpenOCD for your development environment.
<br><br>
As mentioned in the introductory post, we need OpenOCD so we can take binaries that we build and upload them onto the STM32. OpenOCD is a highly configurable tool and understands a number of different protocols. For our purposes, we really only need it to understand STLinkV2, which is what the STM32 DISCOVERY boards use. Also note that unlike previous posts, this post does not need or build a cross-compiled tool. That's because OpenOCD itself runs on our development machine, so we just need to do a normal compile. As before, I'm going to compile a certain version of OpenOCD (0.7.0). Newer or older versions may work, but your mileage may vary.
<br><br>
As before, we start out by exporting some environment variables:
<pre><code>
$ export TOPDIR=~/cross-src
$ export TARGET=arm-none-eabi
$ export PREFIX=~/opt/cross
$ export BUILDPROCS=$( getconf _NPROCESSORS_ONLN )
$ export PATH=$PREFIX/bin:$PATH
</code></pre>
The TOPDIR environment variable is the directory in which the sources are stored. The TARGET environment variable is the architecture that we want our compiler to emit code for. For ARM chips without an operating system (like the STM32), we want arm-none-eabi. The PREFIX environment variable is the location we want our cross-compile tools to end up in; feel free to change this to something more suitable. The BUILDPROCS environment variable is the number of processors that we can use; we will use all of them while building to substantially speed up the build process. Finally, we need to add the location of the cross-compile binaries to our PATH so that later building stages can find it.
<br><br>
Now we are ready to start. Let's fetch openocd:
<pre><code>
$ cd $TOPDIR
$ wget http://downloads.sourceforge.net/project/openocd/\
openocd/0.7.0/openocd-0.7.0.tar.gz
</code></pre>
To start the compile, we first need to install a dependency:
<pre><code>
$ sudo yum install libusbx-devel
</code></pre>
Now let's unpack and build openocd:
<pre><code>
$ tar -xvf openocd-0.7.0.tar.gz
$ cd openocd-0.7.0
$ ./configure --enable-stlink --prefix=$PREFIX
$ make
$ make install
</code></pre>
Here we are unpacking, configuring, building, and installing OpenOCD. The configure flags require a bit of explanation. The --enable-stlink flag enables support for STLink and STLinkV2, which is what we need for this board. The --prefix flag tells the build system to install OpenOCD to our ~/opt/cross location. This isn't strictly proper, since OpenOCD isn't a cross-compile tool; however, it is convenient to have everything in one place, so we install it there.
<br><br>
Assuming everything went properly, we should now have an openocd binary in ~/opt/cross/bin. There will also be a bunch of configuration files installed to ~/opt/cross/share/openocd. These are important, as they are pre-canned configuration files provided by OpenOCD. While it is possible to create your own from scratch, the syntax is baroque and it is a lot more work than you would think. Luckily OpenOCD already comes with scripts for STLinkV2 and the STM32, so we'll just use those.
<br><br>
In order to have a working configuration, we are going to start creating our "project" directory. This is where the code that eventually runs on the STM32 is going to be placed. I'm going to call my directory ~/stm32-project; feel free to change it for your project. So we do:
<pre><code>
$ mkdir ~/stm32-project
$ cd ~/stm32-project
$ cat <<EOF > stm32-openocd.cfg
source [find interface/stlink-v2.cfg]
source [find target/stm32f3x_stlink.cfg]
reset_config srst_only srst_nogate
EOF
</code></pre>
Here we create the project directory, cd into it, and then create the configuration file for OpenOCD. The configuration file deserves a bit of explanation. First, we tell it to "find" the stlink-v2.cfg configuration file. Where it looks depends on the PREFIX we configured, so in our case it is going to look through ~/opt/cross/share/openocd for that file (where it should find it). Next we tell OpenOCD to "find" the stm32f3x_stlink.cfg file. Again, that file is located in ~/opt/cross/share/openocd, and it again should find it. Note that if you have a different STM32 chip, you should substitute f3x with whatever version of the chip you have. Finally, the reset_config line describes the board's reset wiring: srst_only says that only the system reset line (SRST) is available to the debugger, and srst_nogate says that the debug interface remains usable while SRST is asserted.
<br><br>
That's it for OpenOCD. Everything should be built, configured, and ready to go.

Release of ruby-libvirt 0.5.2 (2014-01-08)

This is a release notification for ruby-libvirt 0.5.2. ruby-libvirt is a ruby wrapper around the libvirt API. The changelog between 0.5.1 and 0.5.2 is:
<ul>
<li>Fix to make sure we don't free more entries than retrieved (potential crash)</li>
</ul>
Version 0.5.2 is available from http://libvirt.org/ruby:<br>
<br>
Tarball: <a href="http://libvirt.org/ruby/download/ruby-libvirt-0.5.2.tgz">http://libvirt.org/ruby/download/ruby-libvirt-0.5.2.tgz</a><br>
Gem: <a href="http://libvirt.org/ruby/download/ruby-libvirt-0.5.2.gem">http://libvirt.org/ruby/download/ruby-libvirt-0.5.2.gem</a><br>
<br>
It is also available from rubygems.org; to get the latest version, run:
<br><br>
<code>
$ gem install ruby-libvirt
</code>
<br><br>
As usual, if you run into questions, problems, or bugs, please feel free to
mail me (clalancette at gmail.com) and the libvirt mailing list.
<br><br>
Thanks to Guido Günther for the patch to fix this problem.

Developing STM32 microcontroller code on Linux (Part 3 of 8, building gdb) (2014-01-08)

The first post of this series covered the steps to build and run code for the STM32. The second post covered how to build a cross-compiler for the STM32. This post is going to cover how to build a debugger for the STM32.
<br><br>
Building a debugger isn't strictly necessary for developing on the STM32. However, it can make certain debugging tasks easier, and it is relatively simple to do, so we'll do it here. As with the tools in the last post, the version of gdb used here (7.6) worked for me; your mileage may vary. If gdb fails to cross-compile, try a slightly newer or older version on your development setup. If you still can't build gdb, you can safely skip this step, though you may run into some problems later.
<br><br>
To build gdb, we'll assume you installed the tools to the path used in the last post. If you changed that path, you'll have to edit PREFIX below accordingly.
<br><br>
As before, we start out by exporting some environment variables:
<pre><code>
$ export TOPDIR=~/cross-src
$ export TARGET=arm-none-eabi
$ export PREFIX=~/opt/cross
$ export BUILDPROCS=$( getconf _NPROCESSORS_ONLN )
$ export PATH=$PREFIX/bin:$PATH
</code></pre>
The TOPDIR environment variable is the directory in which the sources are stored. The TARGET environment variable is the architecture that we want our compiler to emit code for. For ARM chips without an operating system (like the STM32), we want arm-none-eabi. The PREFIX environment variable is the location we want our cross-compile tools to end up in; feel free to change this to something more suitable. The BUILDPROCS environment variable is the number of processors that we can use; we will use all of them while building to substantially speed up the build process. Finally, we need to add the location of the cross-compile binaries to our PATH so that later building stages can find it.
<br><br>
Next we'll download, unpack, and build gdb:
<pre><code>
$ cd $TOPDIR
$ wget ftp://ftp.gnu.org/gnu/gdb/gdb-7.6.tar.gz
$ tar -xvf gdb-7.6.tar.gz
$ mkdir build-gdb
$ cd build-gdb
$ ../gdb-7.6/configure --target=$TARGET --prefix=$PREFIX \
--enable-interwork
$ make -j$BUILDPROCS
$ make install
</code></pre>
We download gdb, unpack it, then configure and build it. The flags to configure deserve some explanation. The --target flag says what target we want the debugger to understand; that is, what kind of code it will be debugging. In our case, we want ARM with no operating system. The --prefix flag tells the build that we want our debugger to be installed to $PREFIX. The --enable-interwork flag allows the toolchain to handle a combination of ARM and THUMB code; if you don't know what that is, don't worry about it for now. Assuming this step went fine on your development machine, there should be a binary in ~/opt/cross/bin (or whatever your top-level output directory is) called arm-none-eabi-gdb.

Developing STM32 microcontroller code on Linux (Part 2 of 8, building the cross-compiler) (2014-01-07)

The first post of this series covered the steps to build and run code for the STM32. This post is going to cover how to build a cross-compiler for the STM32.<br><br>
The steps to build a cross-compiler are somewhat covered <a href="http://kunen.org/uC/gnu_tool.html">here</a> and <a href="http://wiki.osdev.org/GCC_Cross-Compiler">here</a>. In theory, building a cross-compiler is a pretty straightforward process:
<ol>
<li>Cross compile binutils, to get things like as (assembler), ld (linker), nm (list object symbols), etc.</li>
<li>Cross compile gcc, which gives you a C and C++ compiler.</li>
<li>Cross compile newlib, which gives you a minimal libc-like environment to program in.</li>
</ol>
However, there is a big gotcha. Not all combinations of binutils, gcc, and newlib work together. Worse, not all combinations of them build on all development environments, which can make this something of a frustrating experience. For instance, it is known that binutils < 2.24 does not build on machines with texinfo 5.x or later. Thus, on modern machines (like Fedora 19), you <b>must</b> use binutils 2.24 or later. Also, I found that the latest newlib as of this writing (2.1.0) does not build on Fedora 19. Your mileage may vary, and this will almost certainly change in the future; the best advice I can give is to start with the latest versions of the packages and then slowly back off the ones that fail until you get a relatively recent combination that works. For the purposes of this post, I ended up using binutils 2.24, gcc 4.8.2, and newlib 2.0.0. This combination builds just fine on Fedora 19.
<br><br>
Now onto the steps needed to build the cross compiling environment. We first need to make sure certain tools are installed. We'll install the development tools through yum:
<pre><code>
$ sudo yum install gcc make tar wget bzip2 gmp-devel \
mpfr-devel libmpc-devel gcc-c++ texinfo ncurses-devel
</code></pre>
Next we fetch the relevant versions of the packages:
<pre><code>
$ mkdir ~/cross-src
$ cd ~/cross-src
$ wget ftp://ftp.gnu.org/gnu/binutils/binutils-2.24.tar.gz
$ wget ftp://ftp.gnu.org/gnu/gcc/gcc-4.8.2/gcc-4.8.2.tar.bz2
$ wget ftp://sources.redhat.com/pub/newlib/newlib-2.0.0.tar.gz
</code></pre>
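FTP downloads occasionally arrive truncated, and a corrupt tarball fails in a confusing way halfway through a build. A quick integrity pass (my own addition, not part of the original steps) catches that early:

```shell
# List each tarball's contents without extracting; a truncated or corrupt
# download fails here instead of partway through the build.
for f in binutils-2.24.tar.gz newlib-2.0.0.tar.gz; do
    tar -tzf "$f" > /dev/null 2>&1 && echo "$f ok" || echo "$f BAD"
done
tar -tjf gcc-4.8.2.tar.bz2 > /dev/null 2>&1 \
    && echo "gcc-4.8.2.tar.bz2 ok" || echo "gcc-4.8.2.tar.bz2 BAD"
```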
Next we set some environment variables. This isn't strictly necessary, but will help us reduce errors in the following steps:
<pre><code>
$ export TOPDIR=~/cross-src
$ export TARGET=arm-none-eabi
$ export PREFIX=~/opt/cross
$ export BUILDPROCS=$( getconf _NPROCESSORS_ONLN )
$ export PATH=$PREFIX/bin:$PATH
</code></pre>
The TOPDIR environment variable is the directory in which the sources are stored. The TARGET environment variable is the architecture that we want our compiler to emit code for. For ARM chips without an operating system (like the STM32), we want arm-none-eabi. The PREFIX environment variable is the location we want our cross-compile tools to end up in; feel free to change this to something more suitable. The BUILDPROCS environment variable is the number of processors that we can use; we will use all of them while building to substantially speed up the build process. Finally, we need to add the location of the cross-compile binaries to our PATH so that later building stages can find it.
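A common slip is appending $PREFIX/bin to PATH instead of prepending it, in which case a system toolchain can shadow the freshly built one. A quick sanity check (a sketch of mine):

```shell
export PREFIX=~/opt/cross
export PATH=$PREFIX/bin:$PATH

# The first PATH entry should now be the cross tools directory, so later
# build stages pick up the new arm-none-eabi-* binaries first.
first=$(printf '%s' "$PATH" | cut -d : -f 1)
if [ "$first" = "$PREFIX/bin" ]; then
    echo "PATH order ok"
else
    echo "PATH order wrong: $first comes first"
fi
```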
<br><br>
Now we can start building. We first need to build binutils:
<pre><code>
$ cd $TOPDIR
$ tar -xvf binutils-2.24.tar.gz
$ mkdir build-binutils
$ cd build-binutils
$ ../binutils-2.24/configure --target=$TARGET --prefix=$PREFIX \
--enable-interwork --disable-nls
$ make -j$BUILDPROCS
$ make install
</code></pre>
Basically we are unpacking binutils, doing an out-of-tree build (recommended), and then installing it. The flags to configure deserve some explanation. The --target flag tells binutils what target you want the tools to build for; that is, what kind of code will be emitted by the tools. In our case, we want ARM with no operating system. The --prefix flag tells binutils that we want our tools to be installed to $PREFIX. The --enable-interwork flag allows binutils to emit a combination of ARM and THUMB code; if you don't know what that is, don't worry about it for now. Finally, the --disable-nls flag tells binutils not to build translation files, which speeds up the build. Assuming this step went fine on your development machine, there should be a set of tools in ~/opt/cross/bin (or whatever your top-level output directory is) called arm-none-eabi-*. If this didn't work, then you might want to try a newer or older version of binutils; you can't proceed any further without this working.
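A quick way to confirm the install before moving on is to probe for the tools by name. This check is my own addition; the tool names are the standard binutils outputs:

```shell
# Each of these should resolve once $PREFIX/bin is on PATH; any "missing"
# line means the binutils build or install step did not complete.
for tool in as ld ar nm objcopy objdump; do
    if command -v "arm-none-eabi-$tool" > /dev/null 2>&1; then
        echo "found arm-none-eabi-$tool"
    else
        echo "missing arm-none-eabi-$tool"
    fi
done
```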
<br><br>
With binutils built, we can now move on to gcc:
<pre><code>
$ cd $TOPDIR
$ tar -xvf newlib-2.0.0.tar.gz
$ tar -xvf gcc-4.8.2.tar.bz2
$ mkdir build-gcc
$ cd build-gcc
$ ../gcc-4.8.2/configure --target=$TARGET --prefix=$PREFIX \
--enable-interwork --disable-nls --enable-languages="c,c++" \
--without-headers --with-newlib \
--with-headers=$TOPDIR/newlib-2.0.0/newlib/libc/include
$ make -j$BUILDPROCS all-gcc
$ make install-gcc
</code></pre>
Here we are unpacking gcc and newlib (which is required for building gcc), doing an out-of-tree build of the initial part of gcc, and then installing it. The flags to configure deserve some explanation. The --target flag tells gcc what target you want the tools to emit code for. The --prefix flag tells gcc that we want our tools to be installed to $PREFIX. The --enable-interwork flag allows gcc to emit a combination of ARM and THUMB code. The --disable-nls flag tells gcc not to build translation files, which speeds up the build. The --enable-languages flag tells gcc which compilers we want it to build; in our case, both the C and C++ compilers. The --without-headers, --with-newlib, and --with-headers flags tell gcc not to use its internal headers, but rather to use newlib and the headers from newlib. Assuming this step finished successfully, there should be a file called ~/opt/cross/bin/arm-none-eabi-gcc, which is the initial compiler. Again, if it didn't work, then you might want to try a newer or older version of gcc; you can't proceed any further without this.
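Once the initial compiler exists, a guarded smoke test (my own sketch, safe to run even before the toolchain is built) shows whether it really emits ARM objects:

```shell
if command -v arm-none-eabi-gcc > /dev/null 2>&1; then
    # Compile a trivial translation unit and inspect the resulting
    # object's architecture; objdump should report arm.
    printf 'int add(int a, int b) { return a + b; }\n' > /tmp/smoke.c
    arm-none-eabi-gcc -c /tmp/smoke.c -o /tmp/smoke.o
    arm-none-eabi-objdump -f /tmp/smoke.o | grep -i arch
else
    echo "arm-none-eabi-gcc is not on PATH yet"
fi
```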
<br><br>
With the initial compiler built, we can now build newlib:
<pre><code>
$ cd $TOPDIR
$ mkdir build-newlib
$ cd build-newlib
$ ../newlib-2.0.0/configure --target=$TARGET --prefix=$PREFIX \
--enable-interwork
$ make -j$BUILDPROCS
$ make install
</code></pre>
Since we've already unpacked newlib, we skip that step. Here we are doing an out-of-tree build of newlib, using the compiler that we built in the last step. The configure flags have the same meaning as previously.
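If the install worked, the target C library and headers should now be under $PREFIX. The paths below follow this post's layout; the check itself is my own addition:

```shell
# newlib installs the target libc and headers under $PREFIX/arm-none-eabi.
for path in "$PREFIX/arm-none-eabi/lib/libc.a" \
            "$PREFIX/arm-none-eabi/include/stdio.h"; do
    if [ -f "$path" ]; then
        echo "present: $path"
    else
        echo "MISSING: $path"
    fi
done
```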
<br><br>
With newlib built, we can now go back and finish the build of gcc (the last step!):
<pre><code>
$ cd $TOPDIR/build-gcc
$ make -j$BUILDPROCS
$ make install
</code></pre>
This finishes the gcc build, and installs it to $PREFIX.
That's it! You should now have a $PREFIX directory full of tools and headers useful for building code to run on the STM32.
<br><br>
Update Jan 8, 2014: Updated the formatting so it is more readable.Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]13tag:blogger.com,1999:blog-8882389959394402329.post-34331750321920946472014-01-06T22:39:00.000-05:002014-01-06T22:39:21.994-05:00Developing STM32 microcontroller code on Linux (Part 1 of 8, introduction)Recently I've been playing with the STM32, which is a small microcontroller made by ST. These seem to be pretty great microcontroller chips; they are relatively fast (depending on what model you get), have a decent amount of flash (up to 1MB), and have a decent amount of memory (up to 192KB). It is also easy to get development boards for them; there is a line of boards called the STM32DISCOVERY boards that are really cheap and easy to get. It is possible to work on these chips entirely with open source tools, which is important to me.
<br><br>
This series of posts will go through all of the steps necessary to develop on these boards. Note that all of this is covered elsewhere on the web, but a lot of the information is either outdated or scattered. I'll build all of the pieces from the ground up to get a working set of tools and binaries that you can use to develop your own STM32 applications.
<br><br>
To start with, I'm going to describe my hardware setup. I have a laptop running <a href="http://fedoraproject.org">Fedora</a> 19 x86_64. This is my main development machine, and this is going to be the host for everything I do with the STM32. For an STM32 board, I have an STM32F3DISCOVERY board, as shown <a href="http://www.st.com/stm32f3discovery">here</a>. However, note that for most of the posts, the exact board that you have isn't that important. As long as it is one of the STM32F*DISCOVERY boards, the steps below will mostly apply. The differences will become more important when we start to actually write code that deals with the GPIOs (as the GPIOs differ per board), but for the development environment they are really all quite similar.
<br><br>
This series of posts will do the steps in the following order:
<ol>
<li>In order to do anything, we need a cross compiler. This is a set of tools that runs on our development environment (Fedora 19 x86_64), but emits instructions for our target hardware (STM32 ARM). Besides the C/C++ compiler, this also includes things like the assembler and linker. Part of the cross-compile toolchain also includes a minimal libc-like environment to program in, which gives you access to &lt;stdio.h&gt; and other familiar header files and functions. We will build a cross compile environment from <a href="https://www.gnu.org/software/binutils/">binutils</a>, <a href="http://gcc.gnu.org/">gcc</a>, and <a href="https://www.sourceware.org/newlib/">newlib</a>.</li>
<li>Once we have a cross-compiler, we need some way to debug the programs we write. The simplest thing to do here is to build <a href="https://www.gnu.org/software/gdb/">gdb</a>, the GNU debugger. Unfortunately we can't just use the system gdb, as that generally only understands how to debug and disassemble code on your development machine architecture. So we'll build our own version of gdb that understands ARM.</li>
<li>With the debugger finished, we need some way to take the compiled version of our code and put it onto the target device. The STM32 devices use something called STLinkV2, which is a multi-purpose communication protocol (generally over USB). In order to upload our code to the device, we need a piece of software that speaks this protocol. Luckily there is <a href="http://openocd.sourceforge.net/">OpenOCD</a>, the Swiss Army Knife of communication protocols. We'll need to build a version of this that runs on our development machine, but knows how to speak STLinkV2. In this step we'll also build a configuration file that can communicate over STLinkV2.</li>
<li>With the communications taken care of, we need a device library. This is basically an abstraction layer that allows us to talk directly to the hardware on the target device. For the purposes of these posts we are going to use <a href="http://libopencm3.org/wiki/Main_Page">libopencm3</a>. This step will build libopencm3 for the target device.</li>
<li>Once we have libopencm3 built, we have to know how to link programs so that they run on the STM32. This step will discuss linker scripts and command-line directives necessary to build programs that run on the STM32.</li>
<li>Here we build our first simple program, upload it to the STM32, and watch it run! Finally!</li>
<li>As a bonus, I discuss running a simple Real-Time Operating System on the STM32, <a href="http://www.freertos.org/">FreeRTOS</a>. Using this will allow you to define several tasks and have the RTOS switch between them, much like tasks on a full-fledged OS. This opens up new possibilities and new problems, some of which will be discussed.</li>
</ol>
Whew, that's a lot of steps just to get the equivalent of "Hello World" running on the board. However, it should be educational and collect a lot of this information together in one place.
Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]0tag:blogger.com,1999:blog-8882389959394402329.post-53168311903554649942014-01-03T14:43:00.002-05:002014-01-03T14:43:41.038-05:00Oz 0.12.0 releaseI'm pleased to announce release 0.12.0 of Oz. Oz is a program for doing automated installation of guest operating systems with limited input from the user. Release 0.12.0 is a bugfix and feature release for Oz. Some of the highlights between Oz 0.11.0 and 0.12.0 are:
<ul>
<li>Fixes to concurrent oz-install invocations</li>
<li>Python 3 compatibility in the test suites</li>
<li>Support for Ubuntu 12.04.3</li>
<li>Support for Mageia</li>
<li>Allow a MAC address to be passed in (instead of auto-generated)</li>
<li>Support for RHEL5.10</li>
<li>Support for Ubuntu 13.10</li>
<li>Use lxml instead of libxml2 for XML document processing (it has much better error messages)</li>
<li>Remove the unused "tunnels" functionality</li>
<li>Support FreeBSD 10.0</li>
<li>Remove deprecated functions from the Guest class</li>
<li>Speed up guest customization on guests that support NetworkManager</li>
<li>Follow subprocess commands as they are executed (makes debugging easier)</li>
<li>Ensure that any paths from the user are absolute, otherwise things don't work properly</li>
<li>Add support for OpenSUSE 13.1</li>
<li>Add support for Fedora 20</li>
<li>Add support for RHEL-7</li>
</ul>
A tarball and zipfile of this release is available on the Github releases page: <a href="https://github.com/clalancette/oz/releases">https://github.com/clalancette/oz/releases</a>. Packages for Fedora-19, Fedora-20, and EPEL-6 have been built in Koji and will eventually make their way to stable. Instructions on how to get and use Oz are available at <a href="http://github.com/clalancette/oz/wiki">http://github.com/clalancette/oz/wiki</a>.
If you have questions or comments about Oz, please feel free to contact me at clalancette at gmail.com, or open up an issue on the github page: <a href="http://github.com/clalancette/oz/issues">http://github.com/clalancette/oz/issues</a>.
Thanks to everyone who contributed to this release through bug reports, patches, and suggestions for improvement.Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]0tag:blogger.com,1999:blog-8882389959394402329.post-30815479496205996322013-12-15T16:22:00.003-05:002013-12-15T16:22:58.477-05:00Release of ruby-libvirt 0.5.1I'm pleased to announce the release of ruby-libvirt 0.5.1. ruby-libvirt is a ruby wrapper around the libvirt API.
The changelog between 0.5.0 and 0.5.1 is:
<ul>
<li>Fixes to compile against older libvirt</li>
<li>Fixes to compile against ruby 1.8</li>
</ul>
Version 0.5.1 is available from <a href="http://libvirt.org/ruby">http://libvirt.org/ruby</a>:
Tarball: <a href="http://libvirt.org/ruby/download/ruby-libvirt-0.5.1.tgz">http://libvirt.org/ruby/download/ruby-libvirt-0.5.1.tgz</a>
Gem: <a href="http://libvirt.org/ruby/download/ruby-libvirt-0.5.1.gem">http://libvirt.org/ruby/download/ruby-libvirt-0.5.1.gem</a>
It is also available from rubygems.org; to get the latest version, run:
$ gem install ruby-libvirt
As usual, if you run into questions, problems, or bugs, please feel free to
mail me ([email protected]) and/or the libvirt mailing list.
Thanks to everyone who contributed patches and submitted bugs.Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]0tag:blogger.com,1999:blog-8882389959394402329.post-32606335910503110702013-12-09T20:52:00.001-05:002013-12-09T20:52:43.174-05:00Release of ruby-libvirt 0.5.0I'm pleased to announce the release of ruby-libvirt 0.5.0. ruby-libvirt is a ruby wrapper around the libvirt API. Version 0.5.0 brings new APIs, more documentation, and bugfixes:
<ul>
<li>Updated Network class, implementing almost all libvirt APIs</li>
<li>Updated Domain class, implementing almost all libvirt APIs</li>
<li>Updated Connection class, implementing almost all libvirt APIs</li>
<li>Updated DomainSnapshot class, implementing almost all libvirt APIs</li>
<li>Updated NodeDevice class, implementing almost all libvirt APIs</li>
<li>Updated Storage class, implementing almost all libvirt APIs</li>
<li>Add constants for almost all libvirt defines</li>
<li>Improved performance in the library by using alloca</li>
</ul>
Version 0.5.0 is available from <a href="http://libvirt.org/ruby">http://libvirt.org/ruby</a>:
Tarball: <a href="http://libvirt.org/ruby/download/ruby-libvirt-0.5.0.tgz">http://libvirt.org/ruby/download/ruby-libvirt-0.5.0.tgz</a>
Gem: <a href="http://libvirt.org/ruby/download/ruby-libvirt-0.5.0.gem">http://libvirt.org/ruby/download/ruby-libvirt-0.5.0.gem</a>
It is also available from rubygems.org; to get the latest version, run:
$ gem install ruby-libvirt
As usual, if you run into questions, problems, or bugs, please feel free to
mail me ([email protected]) and/or the libvirt mailing list.
Thanks to everyone who contributed patches and submitted bugs.Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]7tag:blogger.com,1999:blog-8882389959394402329.post-38279434544624132602013-11-09T09:30:00.000-05:002014-01-27T09:48:11.048-05:00Writing Ruby Extensions in C - Part 13, Wrapping C data structuresThis is the thirteenth in my series of posts about writing ruby extensions in C. The <a href="http://clalance.blogspot.com/2011/01/writing-ruby-extensions-in-c-part-1.html">first</a> post talked about the basic structure of a project, including how to set up building. The <a href="http://clalance.blogspot.com/2011/01/writing-ruby-extensions-in-c-part-2.html">second</a> post talked about generating documentation. The <a href="http://clalance.blogspot.com/2011/01/writing-ruby-extensions-in-c-part-3.html">third</a> post talked about initializing the module and setting up classes. The <a href="http://clalance.blogspot.com/2011/01/writing-ruby-extensions-in-c-part-4.html">fourth</a> post talked about types and return values. The <a href="http://clalance.blogspot.com/2011/01/writing-ruby-extensions-in-c-part-5.html">fifth</a> post focused on creating and handling exceptions. The <a href="http://clalance.blogspot.com/2011/01/writing-ruby-extensions-in-c-part-6.html">sixth</a> post talked about ruby catch and throw blocks. The <a href="http://clalance.blogspot.com/2011/01/writing-ruby-extensions-in-c-part-7.html">seventh</a> post talked about dealing with numbers. The <a href="http://clalance.blogspot.com/2011/01/writing-ruby-extensions-in-c-part-8.html">eighth</a> post talked about strings. The <a href="http://clalance.blogspot.com/2011/01/writing-ruby-extensions-in-c-part-9.html">ninth</a> post focused on arrays. The <a href="http://clalance.blogspot.com/2011/01/writing-ruby-extensions-in-c-part-10.html">tenth</a> post looked at hashes. 
The <a href="http://clalance.blogspot.com/2011/01/writing-ruby-extensions-in-c-part-11.html">eleventh</a> post explored blocks and callbacks. The <a href="http://clalance.blogspot.com/2011/01/writing-ruby-extensions-in-c-part-12.html">twelfth</a> post looked at allocating and freeing memory. This post will focus on wrapping C data structures in ruby objects.<br />
<br />
<h2>Wrapping C data structures</h2>
When developing a ruby extension in C, it may be necessary to save an allocated C structure inside a Ruby object. For instance, in the ruby-libvirt bindings, a virConnectPtr (which points to a libvirt connection object) is saved inside of a Libvirt::Connect ruby object, and that pointer is fetched from the object any time an instance method is called. Note that the pointer to the C structure is stored inside the Ruby object in a way that the ruby code can't get to; only C extensions will have access to this pointer.
There are only 3 APIs that are used to manipulate these pointers:
<ul>
<li>Data_Wrap_Struct(VALUE klass, void (*mark)(), void (*free)(), void *ptr) - Wrap the C data structure in ptr into a class of type klass. The free argument is a function pointer to a function that will be called when the object is being garbage collected. If the C structure references other ruby objects, then the mark function pointer must also be provided and must properly mark the other objects with rb_gc_mark(). This function returns a VALUE which is an object of type klass.</li>
<li>Data_Make_Struct(VALUE klass, c-type, void (*mark)(), void (*free)(), c-type *ptr) - Similar to Data_Wrap_Struct(), but first allocates and then wraps the C structure in an object. The klass, mark, free, and ptr arguments have the same meaning as Data_Wrap_Struct(). The c-type argument is the actual name of the type that needs to be allocated (sizeof(type) will be used to allocate).</li>
<li>Data_Get_Struct(VALUE obj, c-type, c-type *ptr) - Get the C data structure of c-type out of the object obj, and put the result in ptr. Note that this pointer assignment works because this is a macro.</li>
</ul>
<br />
An example will demonstrate the use of these functions:
<pre><code>
1) static VALUE m_example;
2) static VALUE c_conn;
3)
4) struct mystruct {
5)     int a;
6)     int b;
7) };
8)
9) static void mystruct_free(void *s)
10) {
11)     xfree(s);
12) }
13)
14) static VALUE example_open(VALUE m)
15) {
16)     struct mystruct *conn;
17)     conn = ALLOC(struct mystruct);
18)     conn->a = 25;
19)     conn->b = 99;
20)     return Data_Wrap_Struct(c_conn, NULL, mystruct_free, conn);
21) }
22)
23) static VALUE conn_get_a(VALUE c)
24) {
25)     struct mystruct *conn;
26)     Data_Get_Struct(c, struct mystruct, conn);
27)     return INT2NUM(conn->a);
28) }
29)
30) void Init_example(void)
31) {
32)     m_example = rb_define_module("Example");
33)     rb_define_module_function(m_example, "open", example_open, 0);
34)     c_conn = rb_define_class_under(m_example, "Conn", rb_cObject);
35)     rb_define_method(c_conn, "get_a", conn_get_a, 0);
36) }
</code></pre>
<br />
On lines 32 and 33, we define the Example module and give it a module function called "open". Lines 34 and 35 define a Conn class under the Example module, and give the Conn class a "get_a" method. Lines 14 through 21 are where we implement the Example::open function. There, we allocate memory for our C structure, then use Data_Wrap_Struct() to wrap that C structure in a ruby object of type Example::Conn. Note that we also pass mystruct_free() as the free callback; when the object gets reaped by the garbage collector, this function on lines 9 through 12 will be called to free up any memory. Now when the user calls "get_a" on the Example::Conn ruby object, the function on lines 23 through 28 will be called. There we use Data_Get_Struct() to fetch the structure back out of the object, and then return a ruby number for the integer stored inside.
Update: added links to all of the previous articles.
Update Jan 27, 2014: Updated the example to fix the use of ALLOC(). Thanks to Thomas Thomassen in the comments.Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]7tag:blogger.com,1999:blog-8882389959394402329.post-12288247445171569832013-07-28T21:43:00.000-04:002013-07-28T21:43:09.201-04:00Oz 0.11.0 releaseI'm pleased to announce release 0.11.0 of Oz. Oz is a program for doing automated installation of guest operating systems with limited input from the user. Release 0.11.0 is a bugfix and feature release for Oz. Some of the highlights between Oz 0.10.0 and 0.11.0 are:
<ul>
<li>Add support for installing Ubuntu 13.04</li>
<li>Add the ability to get user-specific ICICLE information</li>
<li>Add the ability to generate ICICLE safely, by using a disk snapshot</li>
<li>Add the ability to include extra files and directories on the installation ISO</li>
<li>Add the ability to install to alternate file types, like qcow2, etc.</li>
<li>Add support for installing Ubuntu 5.04/5.10</li>
<li>Add support for installing Fedora 19</li>
<li>Add support for installing Debian 7</li>
<li>Add support for Windows 2012 and 8</li>
<li>Add support for getting files over http for the commands/files section of the TDL</li>
<li>Add support for setting a custom MAC address to guests during installation</li>
<li>Add support for user specified disk and NIC model</li>
<li>Add support for OpenSUSE 12.3</li>
<li>Add support for URL based installs for Ubuntu</li>
</ul>
A tarball and zipfile of this release is available on the Github releases page: <a href="https://github.com/clalancette/oz/releases">https://github.com/clalancette/oz/releases</a>. Packages for Fedora-18 and Fedora-19 have been built in Koji and will eventually make their way to stable. Instructions on how to get and use Oz are available at <a href="http://github.com/clalancette/oz/wiki">http://github.com/clalancette/oz/wiki</a>.
<br><br>
If you have questions or comments about Oz, please feel free to contact me at clalancette at gmail.com, or open up an issue on the github page: <a href="http://github.com/clalancette/oz/issues">http://github.com/clalancette/oz/issues</a>.
<br><br>
This was one of the most active Oz releases ever, because of the feedback and patches from the community. Thanks to everyone who contributed to this release through bug reports, patches, and suggestions for improvement.Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]0tag:blogger.com,1999:blog-8882389959394402329.post-75209726950408017632013-03-09T12:10:00.000-05:002013-03-09T12:10:48.628-05:00Oz 0.10.0 releaseI'm pleased to announce release 0.10.0 of Oz. Oz is a program for doing automated installation of guest operating systems with limited input from the user. Release 0.10.0 is a bugfix and feature release for Oz. Some of the highlights between Oz 0.9.0 and 0.10.0 are:
<ul>
<li>Support for installing OpenSUSE 12.1 and 12.2</li>
<li>Support for python3</li>
<li>Support for Ubuntu 12.04.1, 12.04.2, and 12.10</li>
<li>Fix up &lt;command&gt; ordering so that commands are run in the order they are specified in the XML</li>
<li>Updates and fixes to the documentation</li>
<li>Increase the shutdown timeout to support slower qemu guests</li>
<li>Add a default screenshot directory as /var/lib/oz/screenshots</li>
<li>Support for RHEL 5.9</li>
<li>Support for Fedora 18</li>
<li>Switch over to pycurl for header information; this allows http authentication to work</li>
<li>Switch to using the libvirt built-in screenshot mechanism. This removes the gvnc dependency and makes screenshots more reliable, but requires libvirt 0.9.7 or newer</li>
<li>Delete auto-generated ssh keys after customization</li>
</ul>
A tarball of this release is available, as well as packages for Fedora-17. Instructions on how to get and use Oz are available at <a href="http://github.com/clalancette/oz/wiki">http://github.com/clalancette/oz/wiki</a> .
If you have questions or comments about Oz, please feel free to contact me at [email protected], or open up an issue on the github page: <a href="http://github.com/clalancette/oz/issues">http://github.com/clalancette/oz/issues</a> .
Thanks to everyone who contributed to this release through bug reports, patches, and suggestions for improvement.Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]0tag:blogger.com,1999:blog-8882389959394402329.post-76719345367416158502012-08-18T23:49:00.000-04:002012-08-18T23:49:05.573-04:00Oz 0.9.0 releaseI'm pleased to announce release 0.9.0 of Oz. Oz is a program for doing automated installation of guest operating systems with limited input from the user. Release 0.9.0 is a (long overdue) bugfix and feature release for Oz. Some of the highlights between Oz 0.8.0 and 0.9.0 are:<br />
<ul>
<li>Easier to create Debian/Ubuntu packages</li>
<li>Ability to specify the disk size in the TDL</li>
<li>Ability to specify the number of CPUs and amount of memory used for the installation VM</li>
<li>Cleanup and bugfixes to oz-cleanup-cache</li>
<li>Ability to install Fedora-17 disk images</li>
<li>Ability to install guests as a non-root user. This has several caveats; please see the documentation on <a href="http://github.com/clalancette/oz">http://github.com/clalancette/oz</a> for more information</li>
<li>Ability to install RHEL-6.3 disk images</li>
<li>Ability to install ScientificLinuxCERN disk images</li>
<li>Ability to install Mandrake 8.2 disk images</li>
<li>Ability to install OpenSUSE 10.3 disk images</li>
<li>Ability to install Ubuntu 12.04 disk images</li>
</ul>
A tarball of this release is available, as well as packages for Fedora-16. Instructions on how to get and use Oz are available at <a href="http://github.com/clalancette/oz">http://github.com/clalancette/oz</a> .<br /><br />If you have any questions or comments about Oz, please feel free to contact [email protected] or me ([email protected]) directly.<br /><br />Thanks to everyone who contributed to this release through bug reports, patches, and suggestions for improvement.Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]0tag:blogger.com,1999:blog-8882389959394402329.post-64105330930472442042012-01-24T22:14:00.000-05:002012-01-24T22:18:50.217-05:00Oz 0.8.0 released(this is a little delayed; sorry about that)<br /><br />I'm pleased to announce release 0.8.0 of Oz. Oz is a program for doing automated installation of guest operating systems with limited input from the user.<br /><br />Release 0.8.0 is a (long overdue) bugfix and feature release for Oz. Some of the highlights between Oz 0.7.0 and 0.8.0 are:<br /><ul><br /><li>Optional virtualenv make target</li><br /><li>Conversion of unittests to py.test</li><br /><li>Replace mkisofs with genisoimage</li><br /><li>Debian package</li><br /><li>Ability to change the root password for Debian installs</li><br /><li>Add unittests for ozutil</li><br /><li>Add some unittests for the Guest object</li><br /><li>SSH tunnel (with SSL vhost) support for local repositories (mostly useful for imagefactory)</li><br /><li>Add a new manpage for oz-examples</li><br /><li>Make the output filename configurable with a command-line option to oz-install</li><br /><li>Monitor both network and disk activity when looking for guest activity</li><br /><li>Support for installing Ubuntu 11.10</li><br /><li>Support for SSL certificates for repositories</li><br /><li>Support for an optional version in the TDL</li><br /><li>Support for installing Mandrake 9.1, 9.2, 10.0, 10.1, 10.2</li><br /><li>Support for installing Mandriva 
2006.0, 2007.0, 2008.0</li><br /><li>Support for Ubuntu customization</li><br /><li>Support for installing RHEL 6.2</li><br /></ul><br />A tarball of this release is available, as well as packages for Fedora-15. Instructions on how to get and use Oz are available at http://aeolusproject.org/oz.html<br /><br />If you have any questions or comments about Oz, please feel free to contact [email protected] or me ([email protected]) directly.<br /><br />Thanks to everyone who contributed to this release through bug reports, patches, and suggestions for improvement.Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]0tag:blogger.com,1999:blog-8882389959394402329.post-31720359520767324382011-10-04T16:44:00.000-04:002011-10-04T16:54:04.634-04:00Git bash prompts and tab completionSomeone recently asked me about my nifty bash command-prompt with git branch names. If I'm not in a git directory, then the bash prompt looks normal:<br /><pre><br />[clalance@localhost ~]$ <br /></pre><br />However, as soon as I cd into any directory that is a git repository, my prompt changes:<br /><pre><br />[clalance@localhost oz (master)]$ <br /></pre><br />If I'm in the middle of a rebase, my prompt looks like:<br /><pre><br />[clalance@localhost oz (master|REBASE-i)]$ <br /></pre><br />There are many other prompts, but that just gives you a taste of what you get. All of this goodness is due to the git-completion file that is shipped along with the git sources. The canonical place for git-completion.sh is actually the upstream git sources; you can see it here: <a href='http://repo.or.cz/w/git.git/blob/HEAD:/contrib/completion/git-completion.bash'>http://repo.or.cz/w/git.git/blob/HEAD:/contrib/completion/git-completion.bash</a>. Basically, you download that file, put it somewhere in your home directory (mine is at ~/.git-completion.sh), source it from your .bashrc, and then modify your PS1 to call the appropriate function. 
The end of my .bashrc looks like:<br /><pre><br />source ~/.git-completion.sh<br />export PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ '<br /></pre><br />The additional benefit that you get from sourcing .git-completion.sh is that you get branch auto-completion, which is also a very useful feature.Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]2tag:blogger.com,1999:blog-8882389959394402329.post-57178524773931935342011-09-23T10:56:00.000-04:002011-09-23T11:03:31.521-04:00RPM dependency treesRecently I wondered what the dependency tree for <a href='http://aeolusproject.org'>Aeolus</a> looked like in Fedora. I knew we had a whole host of dependencies, but I thought it would be instructive to see it visually.<br /><br />This has been mentioned in other <a href='http://raftaman.net/?p=905'>blog posts</a> in the past, but the basic procedure to do this on Fedora is:<br /><pre><br /># yum install rpmorphan graphviz<br />$ rpmdep -dot aeolus.dot aeolus-all<br />$ dot -Tsvg aeolus.dot -o aeolus.svg<br /></pre><br />The rpmorphan package provides the rpmdep binary. The rpmdep binary is a perl script that runs through the RPM dependency information, outputting one digraph node per line. Then we use dot (part of the graphviz package) to take that digraph information and generate an image out of it. In the above example I made it generate an SVG, but you can have it output PNG, JPEG, PDF, etc. The full list of what dot can do is here: <a href='http://www.graphviz.org/doc/info/output.html'>http://www.graphviz.org/doc/info/output.html</a>Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]1tag:blogger.com,1999:blog-8882389959394402329.post-87072626679103829662011-09-15T09:28:00.000-04:002011-09-15T09:30:17.196-04:00Oz 0.7.0 releaseI'm pleased to announce release 0.7.0 of Oz. Oz is a program for doing automated installation of guest operating systems with limited input from the user.<br /><br />Release 0.7.0 is a bugfix and feature release for Oz. 
Some of the highlights between Oz 0.6.0 and 0.7.0 are:<br /><ul><br /> <li>Ability to use the "direct initrd injection" method to install Fedora/RHEL guests. This is an internal implementation detail, but can significantly speed up installs for Fedora or RHEL guests. (thanks to Kashyap Chamarthy for the tip)</li><br /> <li>Support for Fedora-16 (thanks to Steve Dake for help in making this work)</li><br /> <li>Use the serial port to announce guest boot, rather than a network port. This means we no longer have to manipulate iptables, and gets us one step closer to having Oz run as non-root</li><br /> <li>(for developers) Rewritten unit tests in Python for speedier execution</li><br /> <li>(for developers) Additional methods in the TDL class to merge in external package lists (thanks to Ian McLeod)</li><br /></ul><br />A tarball of this release is available, as well as packages for Fedora-14, Fedora-15, and RHEL-6. Note that to install the RHEL-6 packages, you must be running RHEL-6.1 or later. Instructions on how to get and use Oz are available at <a href="http://aeolusproject.org/oz.html">http://aeolusproject.org/oz.html</a><br /><br />If you have any questions or comments about Oz, please feel free to contact [email protected] or me ([email protected]) directly.<br /><br />Thanks to everyone who contributed to this release through bug reports, patches, and suggestions for improvement.Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]0tag:blogger.com,1999:blog-8882389959394402329.post-75392139297247250662011-09-08T10:41:00.000-04:002011-09-08T10:45:45.292-04:00New required kickstart line in Fedora 16Just a quick note for anyone looking at Fedora-16. From Fedora-16 forward, you need a new line in your kickstart that looks like:<br /><pre>part biosboot --fstype=biosboot --size=1</pre><br />I'm honestly not sure exactly what this is needed for, but unattended kickstart installs will not start without it.
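For context, this is the BIOS boot partition that GRUB 2 uses to embed its core image on GPT-labeled disks. A minimal sketch of where the line sits in a kickstart partitioning section follows; the other entries here are illustrative placeholders, not taken from this post:

```
# Illustrative kickstart partitioning section; sizes and filesystems are placeholders
zerombr
clearpart --all --initlabel
part biosboot --fstype=biosboot --size=1
part /boot --fstype=ext4 --size=500
part swap --size=512
part / --fstype=ext4 --size=4096 --grow
```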
There is a bit more information at <a href='https://fedoraproject.org/wiki/Anaconda/Kickstart'>https://fedoraproject.org/wiki/Anaconda/Kickstart</a>Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]3tag:blogger.com,1999:blog-8882389959394402329.post-60018759042973071252011-09-06T16:38:00.000-04:002011-09-06T17:05:11.173-04:00Services and systemdI spent some time last week poking around systemd and trying to figure out how certain things work. I can't claim to be an expert yet, but I did uncover some things that I found to be very useful.<br /><br />If you try to start a service on a machine with systemd (Fedora-15, for instance), it actually looks different than with a traditional SysV-style init.<br /><br />SysV:<pre><br />[root@localhost ~]# service mongod start<br />Starting mongod: [ OK ]<br />[root@localhost ~]# service mongod stop<br />Stopping mongod: [ OK ]<br />[root@localhost ~]# /etc/init.d/mongod start<br />Starting mongod: [ OK ]<br />[root@localhost ~]# /etc/init.d/mongod stop<br />Stopping mongod: [ OK ]<br />[root@localhost ~]# <br /></pre><br />Systemd:<pre><br />[root@localhost ~]# service mongod start<br />Starting mongod (via systemctl): [ OK ]<br />[root@localhost ~]# service mongod stop<br />Stopping mongod (via systemctl): [ OK ]<br />[root@localhost ~]# /etc/init.d/mongod start<br />Starting mongod (via systemctl): [ OK ]<br />[root@localhost ~]# /etc/init.d/mongod stop<br />Stopping mongod (via systemctl): [ OK ]<br />[root@localhost ~]# <br /></pre><br />"(via systemctl)" is a small but important change to how services are launched. With SysV-style scripts, the scripts are executed more-or-less directly from the bash shell they are launched from (the "service" binary does a little more in terms of cleaning up the environment, but it still ends up exec'ing the script in the end).<br /><br />With systemd this all changes. One of the first things nearly all initscripts do is to source /etc/init.d/functions.
On a systemd-enabled system, the very first thing that /etc/init.d/functions does is to execute systemctl and then exit (ignoring the rest of the initscript). What systemctl does is to put a message on dbus asking for the service you specified to be started. systemd itself is listening on dbus; when it sees a message like this, it picks up the message and proceeds to act on it. It first looks to see if there is a native systemd unit file for this service; if there is, it starts the service according to the native unit file and returns status to systemctl, which returns status to service. If there is no native systemd unit file, it then looks in /etc/init.d for a legacy script. If one is found, then it forks and execs that script, and returns the status to systemctl, which returns it to service.<br /><br />This leads to one of the most visible issues with systemd, in that there is no output if the initscript fails. That is, if you are using a legacy-style initscript and you are used to certain output being shown when something fails, it may not be shown anymore, since that output was consumed by systemd itself and not returned to systemctl.<br /><br />One way to deal with this is to skip the redirect from service to systemctl. There are different ways to do this depending on whether you are using the "service" binary or executing the script directly. If you are using the service binary, it understands a new flag to skip redirection to systemd:<pre>service --skip-redirect foo start</pre>If you are directly executing the initscript, you need to pass an environment variable: <pre>SYSTEMCTL_SKIP_REDIRECT=1 /etc/init.d/foo start</pre>Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]3tag:blogger.com,1999:blog-8882389959394402329.post-71928267824305919612011-08-22T13:01:00.000-04:002011-08-22T13:03:31.942-04:00Oz 0.6.0 releaseI'm pleased to announce release 0.6.0 of Oz.
Oz is a program for doing automated installation of guest operating systems with limited input from the user.
<br />
<br />Release 0.6.0 is a bugfix and feature release for Oz. Some of the highlights between Oz 0.5.0 and 0.6.0 are:
<br /><ul>
<br /><li>The ability to specify the destination for the ICICLE output from oz-install
<br />and oz-generate-icicle</li>
<br /><li>pydoc class documentation for all internal Oz classes</li>
<br /><li>Automatic detection of KVM or QEMU at runtime (this allows Oz to be used within virtual machines, although with a large performance hit)</li>
<br /><li>Less scary warning messages in the debug output</li>
<br /><li>Printing of the screenshot path when a build fails</li>
<br /><li>Ability to run multiple Oz installs of the same OS at the same time</li>
<br /><li>Support for OEL and Scientific Linux</li>
<br /><li>Support for RHEL-5.7</li>
<br /><li>Support for CentOS 6</li>
<br /><li>Support for OpenSUSE arbitrary file injection and command execution</li>
<br /><li>Ability to make the TDL (template) parsing enforce a root password</li>
<br /><li>Rejection of localhost URLs for repositories (since they must be reachable from the guest operating system, localhost URLs make no sense)</li>
<br /></ul>
<br />Fedora-14, Fedora-15, and RHEL-6 packages are available for this release. Note that to install the RHEL-6 packages, you must be running RHEL-6.1 or later. Instructions on how to get and use Oz are available at <a href="http://aeolusproject.org/oz.html">http://aeolusproject.org/oz.html</a>
<br />
<br />If you have any questions or comments about Oz, please feel free to contact [email protected] or me ([email protected]) directly.
<br />
<br />Thanks to everyone who contributed to this release through bug reports, patches, and suggestions for improvement.Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]0tag:blogger.com,1999:blog-8882389959394402329.post-48063245112499467802011-07-29T08:58:00.000-04:002011-07-29T09:02:26.872-04:00Release of ruby-libvirt 0.4.0I'm pleased to announce the release of ruby-libvirt 0.4.0. ruby-libvirt is a ruby wrapper around the <a href="http://libvirt.org">libvirt</a> API. Version 0.4.0 brings new APIs, more documentation, and bugfixes:<br /> <ul><br /> <li>Updated Domain class, implementing dom.memory_parameters=,<br /> dom.memory_parameters, dom.updated?, dom.migrate2,<br /> dom.migrate_to_uri2, dom.migrate_set_max_speed,<br /> dom.qemu_monitor_command, dom.blkio_parameters,<br /> dom.blkio_parameters=, dom.state, dom.open_console, dom.screenshot,<br /> and dom.inject_nmi</li><br /> <li>Implementation of the Stream class, which covers the<br /> libvirt virStream APIs</li><br /> <li>Add the ability to build against non-system libvirt libraries</li><br /> <li>Updated Error object, which now includes the libvirt<br /> code, component and level of the error, as well as all of<br /> the error constants from libvirt.h</li><br /> <li>Updated Connect class, implementing conn.sys_info, conn.stream,<br /> conn.interface_change_begin, conn.interface_change_commit, and<br /> conn.interface_change_rollback</li><br /> <li>Updated StorageVol class, implementing vol.download and vol.upload</li><br /> <li>Various bugfixes</li><br /> </ul><br />Version 0.4.0 is available from <a href="http://libvirt.org/ruby">http://libvirt.org/ruby</a>:<br /><br />Tarball: <a href="http://libvirt.org/ruby/download/ruby-libvirt-0.4.0.tgz">http://libvirt.org/ruby/download/ruby-libvirt-0.4.0.tgz</a><br />Gem: <a href="http://libvirt.org/ruby/download/ruby-libvirt-0.4.0.gem">http://libvirt.org/ruby/download/ruby-libvirt-0.4.0.gem</a><br /><br />It is also available from 
rubygems.org; to get the latest version, run:<br /><br />$ gem install ruby-libvirt<br /><br />As usual, if you run into questions, problems, or bugs, please feel free to<br />mail me ([email protected]) and/or the libvirt mailing list.Chrishttp://www.blogger.com/profile/12412311785503355784[email protected]6