SVG Wow! (http://svg-wow.org/blog)

Text Effects
Sat, 25 Sep 2010
http://svg-wow.org/blog/2010/09/25/text-effects
Tags: yui, text, animation, interactive

Running the demo

Click the 'start svg demo' button on this page. When the page is loaded, you can click anywhere on the canvas to start the demo.

Play time!

I had never gotten around to playing with SVG text the way I wanted to, and it finally happened with this demo. The demo shows how to access the content of SVG <text> elements and animate individual glyphs.

The demo has a library of 'effects' which are successively applied to quotes (from http://en.wikiquote.org/wiki/American_proverbs).

Breaking text into spans

All the effects work similarly. When applied, they first break the text down into a set of text spans which are then animated individually. The target text element (or elements, since the effect can apply to multiple lines of text) is left unmodified and its display property is set to 'none'. The text spans are inserted after the text element and are the targets of the different animation effects. The key DOM API method used to compute individual glyph positions is getStartPositionOfChar on the SVGTextContentElement interface (see the specification).

// Compute the different glyph positions and how many characters map to the
// same glyph position (may happen with ligatures). 'text' is the source
// <text> element and (dx, dy) is the offset applied to the generated spans.
var p,
    pos = [],
    curPos = null,
    i,
    n = text.textContent.length;

for (i = 0; i < n; i++) {
    p = text.getStartPositionOfChar(i);
    if (curPos === null ||
        p.x + dx !== curPos.p.x || p.y + dy !== curPos.p.y) {
        curPos = {
            p: {x: p.x + dx, y: p.y + dy},
            n: 1
        };
        pos.push(curPos);
    } else {
        // Same position, just increase the text run length
        curPos.n++;
    }
}

Effects and animations

As in several other demos on this web site, the animations are built by extending the YUI 3 JavaScript and CSS library to support SVG. The code also uses the concept of an Effect to encapsulate a number of related animations so that they can be manipulated as a single abstraction. For example, the 'opacity-swipe' effect has one animation on the 'fill-opacity' of each glyph, and each animation starts when the previous one ends. The user can run the effect, which internally starts the first animation and causes the whole animation chain to run. Likewise, the user can use the onBegin and onEnd events on the effect object to invoke specific code when the effect starts or ends.

The following code snippet shows how the 'opacity-swipe' effect applies to the different text spans.

// ...

// toGlyphSpans is a utility method to break the text down into multiple
// elements.
var spans = toGlyphSpans(text),
    n = spans.length,
    s,
    a,
    anims = [];

// duration, offset and easing are defined elsewhere in the code.
for (var i = 0; i < n; i++) {
    s = spans[i];
    a = new Animate({
        node: s,
        from: {'fill-opacity': 0},
        to: {'fill-opacity': 1},
        duration: duration,
        easing: easing
    });
    anims.push(a);
}

for (i = 1; i < n; i++) {
    anims[0].onBegin(anims[i], i * offset);
}

// The Effect utility provides a way to manipulate a group of animations
// as a single entity.
return new Effect(anims);

Music and Font

Music by Roger Kidd (with all my thanks).

The font used in this demo is "GoodDog" and can be found on Font Squirrel. See the attributions page.

picture-shuffle
Sat, 18 Sep 2010
http://svg-wow.org/blog/2010/09/18/picture-shuffle
Tags: css3, yui, canvas, animation, html5

Running the demo

Click the 'start svg demo' button on this page. When the page is loaded, you can click on one of the picture stacks at the bottom. The stack will be dispatched to form a full picture. You can click anywhere on the image to fold it back into a stack.

Creating a polaroid collage from picture cut-outs

The idea for this demo came from my son Romain, who had just come back from an Adobe Photoshop course he was taking. Romain showed me a polaroid collage effect he had created. The effect was quite nice, similar to some you can find on the web if you search for 'Photoshop polaroid effect'. Of course, being an SVG geek, I thought about how this could be done in SVG, and I started hacking away a few days before the SVG Open 2010 conference where this demo was first shown.

I also decided that I wanted the demo to go a little beyond what people do in Photoshop, where the content of the Polaroid collage is perfectly aligned, which helps readability but is a bit unrealistic. I wanted the individual images to be slightly misaligned, as they would be if you had a real stack of Polaroids that you were assembling into a collage.

From idea to implementation

For this to be easier than in Photoshop, the demo had to let the code just point to an image and then automatically slice it up into individual Polaroids that would be assembled into the collage.

I first used a full SVG solution, using the <pattern> element. It did the job functionally. Unfortunately, most implementations (except Safari) could not handle the number of <pattern> elements that had to be created. The performance problem got worse when a drop shadow effect was added to the individual Polaroids, and in some cases (i.e., browsers which do not support SVG filter effects) the drop shadow was simply not an option.
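
For context, the abandoned <pattern> approach looked roughly like the sketch below (illustrative only, not the demo's actual code; svgNS, xlinkNS, defs and imageHref are assumed to be defined elsewhere). One pattern is created per Polaroid, shifting the full image so that a <rect> filled with the pattern only shows its slice.

// Illustrative sketch of the abandoned <pattern>-based slicing (not the
// demo's actual code). One pattern is created per Polaroid; the full image
// is shifted inside the pattern so that the <rect> only shows its slice.
// svgNS, xlinkNS, defs and imageHref are assumed to be defined elsewhere.
function makePatternSlice (id, sliceX, sliceY, sliceW, sliceH, imgW, imgH) {
    var pattern = document.createElementNS(svgNS, 'pattern');
    pattern.setAttribute('id', id);
    pattern.setAttribute('patternUnits', 'userSpaceOnUse');
    pattern.setAttribute('width', sliceW);
    pattern.setAttribute('height', sliceH);

    var image = document.createElementNS(svgNS, 'image');
    image.setAttributeNS(xlinkNS, 'xlink:href', imageHref);
    image.setAttribute('width', imgW);
    image.setAttribute('height', imgH);
    // Shift the full image so the desired slice falls on the pattern tile.
    image.setAttribute('transform',
                       'translate(' + (-sliceX) + ',' + (-sliceY) + ')');
    pattern.appendChild(image);
    defs.appendChild(pattern);

    // The rect is positioned later (e.g., with a transform), so the tile
    // at the origin lines up with it.
    var rect = document.createElementNS(svgNS, 'rect');
    rect.setAttribute('width', sliceW);
    rect.setAttribute('height', sliceH);
    rect.setAttribute('fill', 'url(#' + id + ')');
    return rect;
}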

So, I had to turn to something else.

Canvas to the rescue

The solution that I ended up implementing uses a mix of SVG and Canvas, and I think it is a great example of how the two technologies complement each other.

Canvas is used to create the Polaroids and render the right content from the original picture, with border and drop shadow, as illustrated below.

Figure 1: desired result

Figure 2: one Polaroid

Figure 3: offscreen rendering: content, border and drop shadow.

// Canvas where the image to slice up is rendered once.
var imgCanvas = ...;

// Canvas and context where each Polaroid 'snapshot' is drawn.
var snapshotCanvas = ...,
    snapshotCtx = snapshotCanvas.getContext('2d');

/**
 * Renders a snapshot of the image with the polaroidSize and centered on
 * (cx, cy) with a rotation of r degrees. Some random noise is added to these
 * values to create a more realistic rendering.
 *
 * @param cx the exact image center on the x-axis
 * @param cy the exact image center on the y-axis
 * @param r the exact rotation, for a perfect picture alignment
 */
function snapshot (cx, cy, r) {
    var p = 0.01; // alignmentRoughness
    var ar = (1 - p + 2 * p * Math.random()) * r,
        acx = (1 - p + 2 * p * Math.random()) * cx,
        acy = (1 - p + 2 * p * Math.random()) * cy;

    snapshotCtx.save();
    snapshotCtx.clearRect(0, 0, polaroidSize.width, polaroidSize.height);
    snapshotCtx.translate(polaroidSize.width / 2, polaroidSize.height /2);
    snapshotCtx.rotate(-ar * Math.PI / 180);
    snapshotCtx.translate(-acx, -acy);
    snapshotCtx.drawImage(imgCanvas, 0, 0);
    snapshotCtx.restore();
}

// Further code (not shown) renders the border and drop shadow.

Once each Polaroid is created, it is transferred to an SVG <image> element like so:

var canvas = ...;

// Draw content, border and drop shadow into canvas
// ...

// Convert canvas content to a dataURL
var dataURL = canvas.toDataURL();

// Set the dataURL on an SVG <image>
var image = document.createElementNS(svgNS, 'image');
image.setAttributeNS(xlinkNS, 'xlink:href', dataURL);

Animating with YUI SVG Extensions

As in several other demos on this web site, the animations are created with the YUI SVG Extensions. There is an animation of the transform attribute for each <image> element. The same animation is used both to dispatch an image from the stack to its position and to fold it back into the stack. This is done by using the reverse animation property.

// Reverses the animation when it ends: change its 'direction' and
// also adjust its duration, depending on whether it is dispatching
// or folding back. The duration is in the [0.15, 0.30] seconds range
// for folding back and in the [0.25, 0.5] for dispatching.
function getReverseHandler (a) {
    return function () {
        a.set('reverse', !a.get('reverse'));

        if (a.get('reverse') === true) {
            a.set('duration', 0.15 + 0.15 * Math.random());
        } else {
            a.set('duration', 0.25 + 0.25 * Math.random());
        }
    };
}

// Iterate over all the images created from the canvas dataURLs
while (image !== null) {
    // ...
    anim = new sw.animation.Animate({
        node: image,
        from: {
            transform: {
                r: c.r + 90,
                tx: stackPosition.x + 10 - 20 * Math.random(),
                ty: stackPosition.y + 10 - 10 * Math.random()
            }
        },
        to: {
            transform: {r: c.r, tx: c.cx, ty: c.cy}
        },
        transformTemplate: "translate(#tx, #ty) rotate(#r) " +
                           "translate(" + (-c.cx) + "," + (-c.cy) + ")",
        duration: 0.25 + 0.25 * Math.random(),
        easing: Y.Easing.easeOutStrong
    });
    // ..
    anim.onEnd(getReverseHandler(anim));
    // ...
}

Note how the end handler also changes the animation's duration, so that folding back the pictures is faster than dispatching them. There is also some randomness used on the animation duration to give the effect a more realistic feel.

SVG and Canvas: Great complements

This demo showed how SVG and Canvas can both be used to advantage: Canvas is used to slice images and create pre-rendered Polaroids with their borders and drop shadows, while SVG is used to easily manage interactivity and animation on individual <image> elements. It is great that we have a way to move pixel data from Canvas to SVG through a data URL. It would be even better if there were an API to move pixel data directly between Canvas and SVG without going through Base64 encoding, which wastes memory and hurts performance.

Small use of <foreignObject>

The instructions on the demo area are displayed by embedding HTML content inside a <foreignObject> element.
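
For illustration, embedding HTML this way boils down to something like the sketch below (hypothetical names, not the demo's actual code; svgNS and svgRoot are assumed to exist and the instruction text is made up).

// Sketch of embedding HTML instructions inside SVG via <foreignObject>
// (illustrative names, not the demo's actual code).
var xhtmlNS = 'http://www.w3.org/1999/xhtml';

var foreignObject = document.createElementNS(svgNS, 'foreignObject');
foreignObject.setAttribute('x', 20);
foreignObject.setAttribute('y', 20);
foreignObject.setAttribute('width', 300);
foreignObject.setAttribute('height', 100);

var div = document.createElementNS(xhtmlNS, 'div');
div.textContent = 'Click one of the picture stacks to dispatch it.';
foreignObject.appendChild(div);

svgRoot.appendChild(foreignObject);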

Graffitis
Mon, 06 Sep 2010
http://svg-wow.org/blog/2010/09/06/graffitis
Tags: css3, canvas, yui, animation, html5, interactive

Running the demo

Click the 'start svg demo' button on this page. When the page is loaded, it displays a header with several controls.

You can do several things (which are explained later on):

  • select one of the predefined graffitis following the 'On blackbook' label and see them rendered with SVG and Canvas. There are two predefined graffitis: one is 'draw' from an unknown contributor and the other one is 'katsu' from Katsu.
  • select the random link and see a random graffiti from the http://000000book.com/ (black book) graffiti repository.
  • select the 'record' button and click and drag in the black drawing area to draw strokes. To start a new stroke, release the mouse and then click and drag to draw the new stroke. When you are done recording the graffiti, you can select the 'play' button to see it rendered with SVG and Canvas.
  • export the graffiti as SVG by clicking on the 'show SVG' button.
  • see the graffiti's GML source by clicking on the 'show GML' button.
  • show an alternate rendering of the graffiti using the 'overlay' filter by clicking on the 'show brick overlay' button.

The 'spray paint' check box turns the Canvas rendering on or off.

Graffitis and the Graffiti Markup Language (GML)

Graffitis range from the simplest scribbles to the most sophisticated artwork, as you can see on Flickr for example.

There is amazing work being done to capture graffiti drawings and render them with computer graphics. Some renderings are absolutely stunning, as the work done by Chris Sugrue and Evan Roth shows.

The graffiti data is captured in the Graffiti Markup Language (GML). http://000000book.com/ (black book) is a repository of GML graffitis. Graffitis can be created with applications such as FatTagKatsu for the iPhone, which lets you draw tags and upload them to the black book repository.

Using SVG and Canvas together

This demo uses and manipulates GML JSON data to render and animate tags from the black book repository. The graffiti data is turned into a number of SVG <path> elements, and the 'drawing' effect is simply done by animating the stroke-dashoffset on the different path elements. The getTotalLength method on the SVGPathElement interface provides the length of each path and lets the code compute the proper stroke-dashoffset value.
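
For illustration, here is a minimal sketch of the stroke-dashoffset technique on a single path. The demo drives this with its YUI-based animation objects; the sketch uses a plain requestAnimationFrame loop instead.

// Minimal sketch of the 'drawing' effect on one path: set the dash pattern
// to the path length, then animate stroke-dashoffset from that length down
// to 0. Not the demo's actual animation code.
function drawPath (path, durationMs) {
    var length = path.getTotalLength();
    path.setAttribute('stroke-dasharray', length + ' ' + length);
    path.setAttribute('stroke-dashoffset', length);

    var start = null;
    function step (timestamp) {
        if (start === null) {
            start = timestamp;
        }
        var t = Math.min((timestamp - start) / durationMs, 1);
        path.setAttribute('stroke-dashoffset', length * (1 - t));
        if (t < 1) {
            window.requestAnimationFrame(step);
        }
    }
    window.requestAnimationFrame(step);
}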

The rendering also uses the Canvas API to draw particles of paint as the stroke gets rendered. The particles are sprayed based on the speed of the pen as it moves along the graffiti. The canvas with the particles is part of the SVG tree, as a child of a foreignObject element, which demonstrates how well the two rendering models can be combined.
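
As an illustration of the particle spraying (not the demo's actual code), a sketch along these lines draws a small cloud of dots around the current pen position, with a spread that grows with the pen speed; ctx is assumed to be the 2D context of the canvas hosted in the <foreignObject>.

// Sketch of spraying paint particles around the pen position: the faster
// the pen moves, the wider the spray. Illustrative names and values only.
function sprayParticles (ctx, x, y, speed) {
    var count = 20,
        radius = 2 + speed * 0.5; // spread grows with pen speed
    ctx.fillStyle = 'rgba(255, 255, 255, 0.4)';
    for (var i = 0; i < count; i++) {
        var angle = 2 * Math.PI * Math.random(),
            dist = radius * Math.random();
        ctx.fillRect(x + dist * Math.cos(angle),
                     y + dist * Math.sin(angle),
                     1, 1);
    }
}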

The demonstration also shows how simple it is to create a GML file: when the user selects the 'record' button, he/she can then draw on the canvas and select the 'play' button when done. This will start the rendering of the captured GML content.

An 'overlay' filter effect lurking in a corner

Finally, the demonstration illustrates an 'overlay' filter effect. The overlay compositing rule is not directly available in SVG, as it is not one of the <feBlend> modes. However, it is possible (even though a bit computation intensive) to create an overlay by combining the 'multiply' and 'screen' modes of <feBlend>. This is a somewhat sophisticated filter which I'll describe further in a separate demo dedicated to just that filter.

An ugly hack also lurking

Several implementations (Firefox 3.6, Safari 5.0 and Chrome 6.0) have a bug at the time of this writing and fail to correctly render stroke line joins when the path points are closer than the stroke width. The code contains an ugly hack to get the visually correct result. The hack consists of adding circles at the point positions to make sure the line joins are properly rounded. This is ugly because it adds to the DOM load (many <circle> elements are added) and slows down the animation.
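
The hack boils down to something like the following sketch (illustrative, not the demo's exact code; svgNS and the points array are assumed to exist).

// Sketch of the line-join workaround: add a filled circle at each path
// point so joins look round even when implementations fail to render them.
function addJoinCircles (svgRoot, points, strokeWidth, color) {
    for (var i = 0; i < points.length; i++) {
        var c = document.createElementNS(svgNS, 'circle');
        c.setAttribute('cx', points[i].x);
        c.setAttribute('cy', points[i].y);
        c.setAttribute('r', strokeWidth / 2);
        c.setAttribute('fill', color);
        svgRoot.appendChild(c);
    }
}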

The page header uses the HVD-Peace font.

Camera
Sat, 14 Aug 2010
http://svg-wow.org/blog/2010/08/14/camera
Tags: interactivity, camera, animation, yui, audio

Running the demo

Click the "Start SVG Demo" button at the top of this page. After the loading message disappears, click in the display area to start the demo.

Using the camera metaphor to script zoom and pan

Sometimes, we only think of animation as moving objects, changing their sizes, positions or other properties. This demo shows that we can also create animation effects by moving a 'camera' around.

The camera effect is created by animating the transform on a target <g> in the SVG content, which creates the illusion of a camera moving over the group of elements contained in the target group.
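
Conceptually, each camera state (position, direction, zoom) maps to a transform on the target group along these lines (a sketch of the idea, not the actual camera.js code; the rotation sign in particular is a guess).

// Sketch of turning a camera state into a transform on the target <g>:
// the content is moved so that 'position' lands at the viewport center,
// scaled by 'zoom' and rotated by 'direction'. Conceptual only.
function cameraTransform (position, direction, zoom, viewport) {
    return 'translate(' + (viewport.width / 2) + ',' + (viewport.height / 2) + ') ' +
           'rotate(' + (-direction) + ') ' +
           'scale(' + zoom + ') ' +
           'translate(' + (-position.x) + ',' + (-position.y) + ')';
}

// e.g., target.setAttribute('transform',
//           cameraTransform({x: 243, y: 68}, 90, 7, {width: 800, height: 600}));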

Camera utility

The demo uses a small utility camera.js which makes it easy to create this type of effect.

The following sample shows how to instantiate, configure and start the camera with the action method.

var cameraTarget = Y.one('#cameraTarget'), // Uses YUI 3.0 for DOM access.
    Easing = Y.Easing;

var defaultInterpolators = {
    zoom: Easing.easeNone,
    direction: Easing.easeBothStrong,
    position: Easing.easeBothStrong
};

var camera = new sw.tools.Camera({
    target: cameraTarget, // The element containing the content to navigate over
    viewport: {width: 800, height: 600}, // The camera's viewport
    frameLength: 10,  // Controls the camera's movement speed.
    position: {x: 243, y: 68}, // The camera's initial position
    direction: 90, // The camera angle
    zoom: 7 // The camera's zoom level.
});

camera.addTravelSegments([{
    name: "wherever",
    position: {x: 243, y: 384},
    direction: 90,
    zoom: 7,
    runLength: 1000,
    interpolators: defaultInterpolators
}, {
    name: "you",
    position: {x: 440, y: 368},
    direction: 180,
    zoom: 1.6,
    runLength: 2000,
    interpolators: defaultInterpolators
}, ... // More camera positions
]);

camera.action();

The following figure illustrates how the zoom level, direction and position specify the successive positions of the camera over the composition.

Attributions

Many thanks to Roger Kidd for creating this demo's rhythmic arrangement.
