The Transformation URL API enables you to deliver media assets and apply a large variety of on-the-fly transformations through the use of URL parameters. This reference provides comprehensive coverage of all available URL transformation parameters, including syntax, value details, and examples.
This reference covers the parameters and corresponding options and values that can be used in the <transformations> element of the URL. It also covers the <extension> element.
The transformation names and syntax shown in this reference refer to the URL API.
Depending on the Cloudinary SDK you use, the names and syntax for the same transformation may be different. Therefore, all of the transformation examples in this reference also include the code for generating the example delivery URL from your chosen SDK.
The SDKs additionally provide a variety of helper methods to simplify the building of the transformation URL as well as other built-in capabilities. You can find more information about these in the relevant SDK guides.
Action parameters: Parameters that perform a specific transformation on the asset.
Qualifier parameters: Parameters that do not perform an action on their own, but rather alter the default behavior or otherwise adjust the outcome of the corresponding action parameter.
See the Transformation Guide for additional guidelines and best practices regarding parameter types.
Tip
Visit the Transformation Center in your Cloudinary Console to explore and experiment with transformations across a variety of images and videos.
Although not a transformation parameter belonging to the <transformations> element of the URL, the extension of the URL can transform the format of the delivered asset, in the same way as f_<supported format>.
If f_<supported format> or f_auto is not specified in the URL, the format is determined by the extension. If no format or extension is specified, the asset is delivered in its originally uploaded format.
If using an SDK to generate your URL, you can control the extension using the format parameter, or by adding the extension to the public ID.
If using a raw transformation, for example to define an eager or named transformation, you can specify the extension at the end of the transformation parameters, following a forward slash. For example, c_pad,h_300,w_300/jpg means that the delivery URL has transformation parameters of c_pad,h_300,w_300 and a .jpg extension. c_pad,h_300,w_300/ represents the same transformation parameters, but with no extension.
Note
As the extension is considered to be part of the transformation, be careful when defining eager transformations and transformations that are allowed when strict transformations are enabled, as the delivery URL must exactly match the transformation, including the extension.
Deliver the image as a PNG by using the SDK format parameter, which sets the extension of the URL. Note that in contrast to f (fetch format), this is not a transformation parameter, but rather an SDK parameter that controls the file extension of the public ID in the resulting URL.
Deliver the image in its originally uploaded format (no extension specified):
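As an illustration of how the transformation string and the extension combine into a delivery URL, here is a minimal sketch; the cloud name (demo) and public ID (sample) are placeholders, not values from your account:

```python
# Sketch: join transformation parameters and an optional format extension
# into a delivery URL. "demo" and "sample" are illustrative placeholders.
BASE = "https://res.cloudinary.com/demo/image/upload"

def delivery_url(public_id, transformation="", extension=None):
    """Build <base>/<transformations>/<public_id>[.<extension>]."""
    parts = [BASE]
    if transformation:
        parts.append(transformation)
    filename = public_id + ("." + extension if extension else "")
    return "/".join(parts) + "/" + filename

# c_pad,h_300,w_300/jpg -> transformation parameters plus a .jpg extension
print(delivery_url("sample", "c_pad,h_300,w_300", "jpg"))
# c_pad,h_300,w_300/ -> same transformation parameters, no extension
print(delivery_url("sample", "c_pad,h_300,w_300"))
```

The second call delivers the asset in its originally uploaded format, matching the no-extension case described above.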
Rotates or flips an asset by the specified number of degrees or automatically according to its orientation or available metadata. Multiple modes can be applied by concatenating their values with a dot.
If either the width or height of an asset exceeds 3000 pixels, the asset is automatically downscaled first, and then rotated. This applies to the size of the asset that is the input to the rotation, whether that be the output of a previous chained transformation or the original asset size.
Rotates an image or video based on the specified mode.
Use with: Apply one of the a_auto modes as a qualifier with a cropping action that adjusts the aspect ratio, as per the syntax details and example below.
In the following example, the image is rotated counter-clockwise (a_auto_left) because the original image was a landscape (aspect ratio greater than 1), while the requested aspect ratio is portrait (aspect ratio = 0.7).
If the requested aspect ratio had been 1.0 or larger, the same transformation would not result in rotation.
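The rotation decision in the example above can be illustrated with a simplified sketch. This mirrors only the logic described in this example (landscape original, portrait request); the actual server-side algorithm is more involved:

```python
# Simplified sketch of when a_auto_left triggers a rotation in the
# example above: the original is landscape (aspect ratio > 1) and the
# requested aspect ratio is portrait (< 1). Illustration only, not the
# exact server-side algorithm.
def auto_left_rotates(original_ar, requested_ar):
    return original_ar > 1.0 and requested_ar < 1.0

assert auto_left_rotates(1.5, 0.7)        # landscape -> portrait: rotated
assert not auto_left_rotates(1.5, 1.0)    # requested ratio >= 1: no rotation
```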
A qualifier that crops or resizes the asset to a new aspect ratio, for use with a crop/resize mode that determines how the asset is adjusted to the new dimensions.
Applies the specified background color on transparent background areas in an image.
Can also be used as a qualifier to override the default background color for padded cropping of images and videos, text overlays and generated waveform images.
A qualifier that automatically selects the background color based on one or more predominant colors in the image, for use with one of the padding crop mode transformations.
Pad an image to a width and height of 150 pixels, with the background set to a gradient of the 2 most predominant colors from the image, blended in a diagonally descending direction (b_auto:predominant_gradient:2:diagonal_desc,c_pad,h_150,w_150):
Pad an image to a width and height of 150 pixels, with a 4 color gradient fade in the auto colored padding, and limiting the possible colors to red, green, blue, and orange (b_auto:predominant_gradient:4:palette_red_green_blue_orange,c_pad,h_150,w_150):
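The b_auto values in the two examples above follow a consistent colon-separated pattern, which can be assembled programmatically. A minimal sketch (the helper name is illustrative):

```python
# Sketch: compose a b_auto:predominant_gradient value from its parts,
# following the pattern of the two examples above.
def auto_background(style, color_count, direction=None, palette=None):
    value = "b_auto:%s:%d" % (style, color_count)
    if direction:
        value += ":" + direction
    if palette:
        value += ":palette_" + "_".join(palette)
    return value

# Matches the first example above:
print(auto_background("predominant_gradient", 2, direction="diagonal_desc"))
# Matches the second example above:
print(auto_background("predominant_gradient", 4,
                      palette=["red", "green", "blue", "orange"]))
```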
A qualifier that automatically fills the padded area using generative AI to extend the image seamlessly. Optionally include a prompt to guide the image generation.
Using different seeds, you can regenerate the image if you're not happy with the result. You can also use seeds to return a previously generated result, as long as any other preceding transformation parameters are the same.
Notes and limitations:
Generative fill can only be used on non-transparent images.
If you get blurred results when using this feature, it is likely that the built-in NSFW (Not Safe For Work) check has detected something inappropriate. You can contact support to disable this check if you believe it is too sensitive.
Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
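Because initial requests can return a 423 response while the transformation is processed, client code may want to poll until the derived asset is ready. A minimal sketch, assuming a plain HTTP GET of the delivery URL (retry count and delay are illustrative):

```python
# Sketch: poll a transformation URL that may return 423 while the
# derived asset is being generated. Retry/delay values are illustrative.
import time
import urllib.error
import urllib.request

def fetch_when_ready(url, opener=urllib.request.urlopen, retries=5, delay=2.0):
    """Fetch a derived asset, retrying while the server returns 423."""
    for _ in range(retries):
        try:
            with opener(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code != 423:
                raise            # a different error: surface it
            time.sleep(delay)    # still processing: wait and retry
    raise TimeoutError("asset not ready after %d attempts" % retries)
```

Alternatively, as noted above, preparing derived versions with an eager transformation avoids the 423 response entirely.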
Establishes a baseline transformation from a named transformation. The baseline transformation is cached, so when re-used with other transformation parameters, the baseline part of the transformation does not have to be regenerated, saving processing time and cost.
Notes
You can combine the baseline transformation with other transformation parameters, but it must be the first component in the chain and the only transformation parameter in that component.
Add a 60-pixel wide border in a semi-transparent color, specified as an RGBA hex quadruplet in which the last 2 digits are the hex value of the alpha channel (bo_60px_solid_rgb:00390b60):
Add multiple 200-pixel wide borders (blue, red and green) to an image (bo_200px_solid_blue/bo_200px_solid_red/bo_200px_solid_green):
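Chaining, as used in the multi-border example above, joins transformation components with forward slashes; each component applies to the result of the previous one. A minimal sketch (the cloud name and public ID are placeholders):

```python
# Sketch: chain multiple transformation components with "/". Each
# chained component transforms the output of the previous one.
def chain(*components):
    return "/".join(components)

borders = chain("bo_200px_solid_blue",
                "bo_200px_solid_red",
                "bo_200px_solid_green")
url = "https://res.cloudinary.com/demo/image/upload/%s/sample.jpg" % borders
print(url)
```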
Controls the bitrate for audio or video files in bits per second. By default, a variable bitrate (VBR) is used, with this value indicating the maximum bitrate.
Supported for video codecs: h264, h265 (MPEG-4); vp8, vp9 (WebM)
Supported for audio codecs: aac, mp3, vorbis
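Bitrate values can be given using the k/m shorthand seen in examples throughout this reference (for example, br_500k or br_2m). A small sketch converting a bits-per-second number to that shorthand:

```python
# Sketch: format a bits-per-second number as the k/m shorthand used
# in br_ values (e.g. 500000 -> "br_500k", 2000000 -> "br_2m").
def br_param(bps):
    if bps % 1_000_000 == 0:
        return "br_%dm" % (bps // 1_000_000)
    if bps % 1_000 == 0:
        return "br_%dk" % (bps // 1_000)
    return "br_%d" % bps

print(br_param(500_000))    # a 500 kbps maximum (VBR by default)
print(br_param(2_000_000))  # a 2 Mbps maximum
```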
Changes the size of the delivered asset according to the requested width and height dimensions.
Depending on the selected <crop mode>, parts of the original asset may be cropped out and/or the asset may be resized (scaled up or down).
When using any of the modes that can potentially crop parts of the asset, the selected gravity parameter controls which part of the original asset is kept in the resulting delivered file.
Automatically determines the best crop based on the gravity and specified dimensions.
If the requested dimensions are smaller than the best crop, the result is downscaled. If the requested dimensions are larger than the original image, the result is upscaled. Use this mode in conjunction with the g (gravity) parameter.
Tries to prevent a "bad crop" by first attempting to use the auto cropping mode, but adding some padding if the algorithm determines that more of the original image needs to be included in the final image. Especially useful if the aspect ratio of the delivered asset is considerably different from the original's aspect ratio. Supported only in conjunction with g_auto.
You can also specify a specific region of the original image to keep by using the x and y qualifiers together with the w (width) and h (height) qualifiers to define an exact bounding box. When using this method and no gravity is specified, the x and y coordinates are relative to the top-left (north-west) corner of the original asset. You can also use percentage-based numbers instead of exact coordinates for x, y, w and h (e.g., 0.5 for 50%). Use this method only when you already have the required absolute cropping coordinates. For example, if your application lets users upload their own content and manually select a region to crop from the original image, you can pass those coordinates to build the crop URL.
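The exact-region crop described above can be sketched as a helper that accepts either absolute pixel coordinates (integers) or percentage-based values (decimals). Parameter ordering within the component is illustrative:

```python
# Sketch: exact-region crop. Coordinates may be absolute pixels
# (integers) or relative values (decimals, e.g. 0.5 for 50%).
def region_crop(x, y, w, h):
    def fmt(v):
        return ("%g" % v) if isinstance(v, float) else str(v)
    return "c_crop,h_%s,w_%s,x_%s,y_%s" % (fmt(h), fmt(w), fmt(x), fmt(y))

print(region_crop(100, 50, 300, 200))      # absolute bounding box
print(region_crop(0.1, 0.1, 0.5, 0.5))     # percentage-based bounding box
```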
Creates an asset with the exact specified width and height without distorting it. This option first scales the asset as much as needed to at least fill both of the specified dimensions. If the requested aspect ratio is different from the original, cropping occurs on the dimension that exceeds the requested size after scaling. You can use the gravity parameter (set to center by default) to specify which part of the original asset to keep if cropping occurs.
Tries to prevent a "bad crop" by first attempting to use the fill mode, but adding some padding if the algorithm determines that more of the original image needs to be included in the final image, or if more content in specific frames in a video should be shown. Especially useful if the aspect ratio of the delivered asset is considerably different from the original's aspect ratio. Supported only in conjunction with g_auto.
Scales the asset up or down so that it takes up as much space as possible within a bounding box defined by the specified dimension parameters without cropping any of it. The original aspect ratio is retained and all of the original image is visible.
Requires the Imagga Crop and Scale add-on. The Imagga Crop and Scale add-on can be used to smartly crop your images based on areas of interest within each specific photo as automatically calculated by the Imagga algorithm.
Requires the Imagga Crop and Scale add-on. The Imagga Crop and Scale add-on can be used to smartly scale your images based on automatically calculated areas of interest within each specific photo.
The lfill (limit fill) mode is the same as fill but only if the original image is larger than the specified resolution limits, in which case the image is scaled down to fill the specified width and height without distorting the image, and then the dimension that exceeds the request is cropped. If the original dimensions are smaller than the requested size, it is not resized at all. This prevents upscaling. You can specify which part of the original image you want to keep if cropping occurs using the gravity parameter (set to center by default).
Same as the fit mode but only if the original asset is larger than the specified limit (width and height), in which case the asset is scaled down so that it takes up as much space as possible within a bounding box defined by the specified width and height parameters. The original aspect ratio is retained (by default) and all of the original asset is visible. This mode doesn't scale up the asset if your requested dimensions are larger than the original image size.
The lpad (limit pad) mode is the same as pad but only if the original asset is larger than the specified limit (width and height), in which case the asset is scaled down to fill the specified width and height while retaining the original aspect ratio (by default) and with all of the original asset visible. This mode doesn't scale up the asset if your requested dimensions are bigger than the original asset size. Instead, if the proportions of the original asset do not match the requested width and height, padding is added to the asset to reach the required size. You can also specify where the original asset is placed by using the gravity parameter (set to center by default). Additionally, you can specify the color of the background in the case that padding is added.
The mfit (minimum fit) mode is the same as fit but only if the original image is smaller than the specified minimum (width and height), in which case the image is scaled up so that it takes up as much space as possible within a bounding box defined by the specified width and height parameters. The original aspect ratio is retained (by default) and all of the original image is visible. This mode doesn't scale down the image if your requested dimensions are smaller than the original image's.
The mpad (minimum pad) mode is the same as pad but only if the original image is smaller than the specified minimum (width and height), in which case the image is unchanged but padding is added to fill the specified dimensions. This mode doesn't scale down the image if your requested dimensions are smaller than the original image's. You can also specify where the original image is placed by using the gravity parameter (set to center by default). Additionally, you can specify the color of the background in the case that padding is added.
Pad a 100-pixel wide image to a minimum width and height of 150 pixels with a green background, while retaining the aspect ratio (b_green,c_mpad,h_150,w_150):
Resizes the asset to fill the specified width and height while retaining the original aspect ratio (by default) and with all of the original asset visible. If the proportions of the original asset do not match the specified width and height, padding is added to the asset to reach the required size. You can also specify where the original asset is placed using the gravity parameter (set to center by default). Additionally, you can specify the color of the background in the case that padding is added.
Resizes the asset exactly to the specified width and height. All original asset parts are visible, but might be stretched or shrunk if the dimensions you request have a different aspect ratio than the original.
If only width or only height is specified, then the asset is scaled to the new dimension while retaining the original aspect ratio (unless you also include the fl_ignore_aspect_ratio flag).
Creates image thumbnails based on a gravity position. Must always be accompanied by the g (gravity) parameter. This cropping mode generates a thumbnail of an image with the exact specified width and height dimensions and with the original proportions retained, but the resulting image might be scaled to fit in the specified dimensions. You can specify the z (zoom) parameter to determine how much to scale the resulting image within the specified width and height.
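A c_thumb component combining the gravity and zoom qualifiers, per the description above, can be sketched as follows (g_face and z_0.7 are illustrative values; g and z are described elsewhere in this reference):

```python
# Sketch: thumbnail crop with a gravity qualifier and an optional
# zoom factor, per the c_thumb description above.
def thumb(width, height, gravity="face", zoom=None):
    params = ["c_thumb", "g_%s" % gravity, "h_%d" % height, "w_%d" % width]
    if zoom is not None:
        params.append("z_%g" % zoom)
    return ",".join(params)

print(thumb(150, 150, zoom=0.7))  # face-gravity thumbnail, zoomed out
```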
Controls the color space used for the delivered image or video.
If you don't include this parameter in your transformation, the color space of the original asset is generally retained. In some cases for videos, the color space is normalized for web delivery, unless cs_copy is specified.
Specifies a backup placeholder image to be delivered in the case that the actual requested delivery image or social media picture does not exist. Any requested transformations are applied on the placeholder image as well.
Controls the density to use when delivering an image or when converting a vector file such as a PDF or EPS document to a web image delivery format.
For web image formats: By default, if an image does not contain resolution information in its embedded metadata, Cloudinary normalizes any derived images for web optimization purposes and delivers them at 150 dpi. Controlling the dpi can be useful when generating a derived image intended for printing.
Tip
You can take advantage of the idn (initial density) value to automatically set the density of your image to the (pre-normalized) initial density of the original image (for example, dn_idn). This value is taken from the original image's metadata.
For vector files (PDF, EPS, etc.): When you deliver a vector file in a web image format, it is delivered by default at 150 dpi.
Delivers the image or video in the specified device pixel ratio.
Note
When setting a DPR value, you must also include a crop/resize transformation specifying a certain width or height.
Important
When delivering at a DPR value larger than 1, ensure that you also set the desired final display dimensions in your image or video tag. For example, if you set c_scale,h_300/dpr_2.0 in your delivery URL, you should also set height=300 in your image tag. Otherwise, the image will be delivered at 2.0 x the requested dimensions (a height of 600px in this example).
While the code example below shows only the transformation URL, the image tag for the displayed inline image includes a hard-coded height definition in the image tag, to ensure that the doubled-DPR is still delivered within a display of 150px. If you view just the transformed dpr_2.0 URL outside the image tag, it displays with a height of 300px.
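The relationship between the requested height, the DPR value, and the delivered pixel height can be sketched as follows (the cloud name and public ID are placeholders):

```python
# Sketch: with dpr_2.0, the delivered pixel height is double the
# requested one, so the tag should still declare the display height.
def delivered_height(requested_height, dpr):
    return int(requested_height * dpr)

display_h = 300
url = ("https://res.cloudinary.com/demo/image/upload/"
       "c_scale,h_%d/dpr_2.0/sample.jpg" % display_h)
img_tag = '<img src="%s" height="%d">' % (url, display_h)

print(delivered_height(display_h, 2.0))  # pixels actually delivered
print(img_tag)                           # tag still declares 300px
```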
Delivers the image in a resolution that automatically matches the DPR (Device Pixel Ratio) setting of the requesting device, rounded up to the nearest integer. Only works for certain browsers and when Client-Hints are enabled.
Sets the duration (in seconds) of a video or audio clip.
Can be used independently to trim a video or audio clip to the specified length. This parameter is often used in conjunction with the so (start offset) and/or eo (end offset) parameters.
Can be used as a qualifier to control the length of time for a corresponding transformation.
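Trimming with du, so and eo can be sketched as a helper that emits the comma-separated qualifiers (the ordering of qualifiers within the component here is alphabetical and illustrative; values are in seconds):

```python
# Sketch: trim a video using a start offset plus either a duration
# or an end offset. All values are seconds.
def trim(start=None, duration=None, end=None):
    params = []
    if duration is not None:
        params.append("du_%g" % duration)
    if end is not None:
        params.append("eo_%g" % end)
    if start is not None:
        params.append("so_%g" % start)
    return ",".join(params)

print(trim(start=2, duration=5))  # 5 seconds starting at 0:02
print(trim(start=2, end=7))       # equivalent clip via end offset
```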
If you specify more than one effect in a transformation component (separated by commas), only the last effect in that component is applied.
To combine effects, use separate components (separated by forward slashes) following best practice guidelines, which recommend including only one action parameter per component.
You can combine the background_removal effect with other transformation parameters, but the background_removal effect must be the first component in the chain.
With no other action parameters in the same component (as per our best practice guidelines), the background-removed version is saved so that when used for other derived versions of the background-removed asset, the add-on is not called again for that asset.
The first time the add-on is called for an asset, a 423 error response is returned until the processing has completed.
The add-on imposes a limit of 4,194,304 (2048 x 2048) total pixels on its input images. If an image exceeds this limit, the add-on first scales down the image to fit the limit, and then processes it. The scaling does not affect the aspect ratio of the image, but it does alter its output dimensions.
Background removal on the fly cannot currently be used for image overlays. Instead, apply the base image as an underlay.
Background removal on the fly is not supported for fetched images.
Remove the background of an image, specifying the yellow background to be removed, rather than the red border that the algorithm would otherwise choose (e_bgremoval:rgb:ffff00):
Remove the green-screen background of an image (e_bgremoval:screen):
Applies a blurring filter to the region of an image specified by x, y, width and height, or an area of text. If no region is specified, the whole image is blurred.
Causes a video clip to play forwards and then backwards.
Use in conjunction with trimming parameters (duration, start_offset, or end_offset) and the loop effect to deliver a classic (short, repeating) boomerang clip.
e_camera[[:up_<vertical position>][;right_<horizontal position>][;zoom_<zoom amount>][;env_<environment>][;exposure_<exposure amount>][;frames_<number of frames>]]
A qualifier that lets you customize a 2D image captured from a 3D model, as if a photo is being taken by a camera.
The camera always points towards the center of the 3D model and can be rotated around it. Specify the position of the camera, the exposure, zoom and lighting to capture your perfect shot.
Use with fl_animated to create a 360 spinning animation.
Capture a PNG image of the cute-kitty 3D model (f_png) with the camera positioned at an angle of 20 degrees below the cat (up_-20) and rotated 45 degrees to the right (right_45):
Create a 360 animation of the cute-kitty 3D model (fl_animated,f_webp) with the camera positioned 60 degrees up and starting at 45 degrees to the right, capturing 36 frames (e_camera:up_60;right_45;frames_36):
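The e_camera syntax shown above (optional semicolon-separated options after a colon) can be assembled programmatically. A minimal sketch:

```python
# Sketch: assemble the e_camera qualifier from its optional
# semicolon-separated options, in the order given by the syntax above.
def camera(**options):
    order = ["up", "right", "zoom", "env", "exposure", "frames"]
    parts = ["%s_%s" % (k, options[k]) for k in order if k in options]
    return "e_camera" + (":" + ";".join(parts) if parts else "")

# Matches the 360-animation example above:
print(camera(up=60, right=45, frames=36))
```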
Trims pixels according to the transparency levels of a specified overlay image. Wherever an overlay image is transparent, the original is shown, and wherever an overlay is opaque, the resulting image is transparent.
Displaces the pixels in an image according to the color channels of the pixels in another specified image (a gradient map specified with the overlay parameter).
Adds a shadow to the object(s) in an image. Specify the angle and spread of the light source causing the shadow.
Notes
Either:
the original image must include transparency, for example where the background has already been removed and it has been stored in a format that supports transparency, such as PNG, or
the dropshadow effect must be chained after the background_removal effect, for example:
Uses AI to analyze an image and make adjustments to enhance the appeal of the image, such as:
Exposure reduction: Correcting overexposed images, smartly reducing excessive brightness and reclaiming details in bright areas, bringing back a balanced exposure.
Exposure enhancement: Adjusting underexposed images by enhancing dim areas, thus improving overall exposure without compromising the image's natural quality.
Color intensification: Enriching color vividness, making hues more vibrant and lively, thus bringing a more dynamic color range to the image.
Color temperature correction: Adjusting the white balance, correcting color casts and ensuring that the colors in the image accurately reflect their real-world appearance.
During processing, large images are downscaled to a maximum of 4096 x 4096 pixels, then upscaled back to their original size, which may affect quality.
Extracts an area or multiple areas of an image, described in natural language. You can choose to keep the content of the extracted area(s) and make the rest of the image transparent (like background removal), or make the extracted area(s) transparent, keeping the content of the rest of the image. Alternatively, you can make a grayscale mask of the extracted area(s) or everything excluding the extracted area(s), which you can use with other transformations such as e_mask, e_multiply, e_overlay and e_screen.
Notes and limitations:
During processing, large images are downscaled to a maximum of 2048 x 2048 pixels, then upscaled back to their original size, which may affect quality.
When you specify more than one prompt, all the objects specified in each of the prompts will be extracted whether or not multiple_true is specified in the URL.
User-defined variables cannot be used for the prompt when more than one prompt is specified.
Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
Extract all the women in the image (e_extract:prompt_woman;multiple_true):
Provide a mask for the woman on the right (e_extract:prompt_the%20woman%20on%20the%20right;mode_mask):
Extract the camera, its straps, and the man (e_extract:prompt_(the%20camera;the%20man;the%20straps%20hanging%20from%20the%20camera)):
Extract the camera, its straps, and the man and invert the result to keep the rest of the content (e_extract:prompt_(the%20camera;the%20man);invert_true):
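Prompts containing spaces must be URL-encoded, and multiple prompts are wrapped in parentheses and separated by semicolons, as the examples above show. A sketch covering both cases:

```python
# Sketch: build an e_extract value. Spaces in prompts are percent-
# encoded; multiple prompts are wrapped in parentheses and separated
# by semicolons, per the examples above.
from urllib.parse import quote

def extract(*prompts, invert=False, mode=None):
    encoded = [quote(p) for p in prompts]
    value = encoded[0] if len(encoded) == 1 else "(" + ";".join(encoded) + ")"
    parts = ["prompt_" + value]
    if invert:
        parts.append("invert_true")
    if mode:
        parts.append("mode_" + mode)
    return "e_extract:" + ";".join(parts)

# Matches the mask example above:
print(extract("the woman on the right", mode="mask"))
# Matches the inverted multi-prompt example above:
print(extract("the camera", "the man", invert=True))
```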
Replaces the background of an image with an AI-generated background. If no prompt is specified, the background is based on the contents of the image. Otherwise, the background is based on the natural language prompt specified.
For images with transparency, the generated background replaces the transparent area. For images without transparency, the effect first determines the foreground elements and leaves those areas intact, while replacing the background.
Using different seeds, you can regenerate a background if you're not happy with the result. You can also use seeds to return a previously generated result, as long as any other preceding transformation parameters are the same.
Notes and limitations:
The use of generative AI means that results may not be 100% accurate.
If you get blurred results when using this feature, it is likely that the built-in NSFW (Not Safe For Work) check has detected something inappropriate. You can contact support to disable this check if you believe it is too sensitive.
Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
Uses generative AI to recolor parts of your image, maintaining the relative shading. Specify one or more prompts and the color to change them to. Use the multiple parameter to replace the color of all instances of the prompt when one prompt is given.
Notes and limitations:
The generative recolor effect can only be used on non-transparent images.
The use of generative AI means that results may not be 100% accurate.
The generative recolor effect works best on simple objects that are clearly visible.
Very small objects and very large objects may not be detected.
During processing, large images are downscaled to a maximum of 2048 x 2048 pixels, then upscaled back to their original size, which may affect quality.
When you specify more than one prompt, all the objects specified in each of the prompts will be recolored whether or not multiple_true is specified in the URL.
User-defined variables cannot be used for the prompt when more than one prompt is specified.
Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
Tip
Consider using e_replace_color if you want to recolor everything of a particular color in your image, rather than specific elements.
Example 2: Make the sweater, dog and earring a purple color, represented by the hex string 5632a8 (e_gen_recolor:prompt_(sweater;dog;earring);to-color_5632a8):
Example 3: Recolor all the geese in the image to pink (e_gen_recolor:prompt_goose;to-color_pink;multiple_true):
Uses generative AI to remove unwanted parts of your image, replacing the area with realistic pixels. Specify either one or more prompts or one or more regions. Use the multiple parameter to remove all instances of the prompt when one prompt is given.
By default, shadows cast by removed objects are not removed. If you want to remove the shadow, when specifying a prompt you can set the remove-shadow parameter to true.
Notes and limitations:
The generative remove effect can only be used on non-transparent images.
The use of generative AI means that results may not be 100% accurate.
The generative remove effect works best on simple objects that are clearly visible.
Very small objects and very large objects may not be detected.
Do not attempt to remove faces or hands.
During processing, large images are downscaled to a maximum of 6140 x 6140 pixels, then upscaled back to their original size, which may affect quality.
When you specify more than one prompt, all the objects specified in each of the prompts will be removed whether or not multiple_true is specified in the URL.
User-defined variables cannot be used for the prompt when more than one prompt is specified.
Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
Uses generative AI to replace parts of your image with something else. Use the preserve-geometry parameter to fill exactly the same shape with the replacement.
Notes and limitations:
The generative replace effect can only be used on non-transparent images.
The use of generative AI means that results may not be 100% accurate.
The generative replace effect works best on simple objects that are clearly visible.
Very small objects and very large objects may not be detected.
Do not attempt to replace faces, hands or text.
During processing, large images are downscaled to a maximum of 2048 x 2048 pixels, then upscaled back to their original size, which may affect quality.
Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
Example 2: Replace "the picture" with "a Van Gogh style painting of cornfields", keeping the area of the replacement exactly the same (e_gen_replace:from_the%20picture;to_a%20van%20gogh%20style%20painting%20of%20cornfields;preserve-geometry_true):
Original image / Exact area of picture replaced
Example 3: Replace all the rectangle frames with clocks (e_gen_replace:from_rectangle%20frame;to_clock;multiple_true):
Uses generative AI to restore details in poor quality images or images that may have become degraded through repeated processing and compression.
Consider also using the improve effect to automatically adjust color, contrast and brightness, or the enhance effect to improve the appeal of an image based on AI analysis. See this comparison of image enhancement options.
Notes and limitations:
The generative restore effect can only be used on non-transparent images.
The use of generative AI means that results may not be 100% accurate.
Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
Applies a gradient fade effect from the edge of an image. Use x or y to indicate from which edge to fade and how much of the image should be faded. Values of x and y can be specified as a percentage (range: 0.0 to 1.0), or in pixels (integer values). Positive values fade from the top (y) or left (x). Negative values fade from the bottom (y) or right (x). By default, the gradient is applied to the top 50% of the image (y_0.5).
Pad an image to a width of 200 pixels and a height of 150 pixels, with the background color set to the predominant color, and with a gradient fade effect between the added padding and the image (e_gradient_fade:symmetric_pad):
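The e_gradient_fade options described above can be sketched as a small helper. Note that x and y are separate comma-delimited qualifiers in the same component, while a mode such as symmetric_pad follows a colon, as in the padding example above:

```python
# Sketch: build an e_gradient_fade component. x and y are separate
# comma-delimited qualifiers; a mode (e.g. symmetric_pad) follows a
# colon. Decimals are percentages; negatives fade from bottom/right.
def gradient_fade(x=None, y=None, mode=None):
    parts = ["e_gradient_fade" + ((":" + mode) if mode else "")]
    if x is not None:
        parts.append("x_%g" % x)
    if y is not None:
        parts.append("y_%g" % y)
    return ",".join(parts)

print(gradient_fade(y=0.5))               # default-like fade: top 50%
print(gradient_fade(x=-0.3))              # fade 30% in from the right
print(gradient_fade(mode="symmetric_pad"))  # as in the padding example
```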
When generating a 2D image from a 3D model, this effect introduces a light source to cast a shadow. You can control the intensity of the shadow that's cast.
Note
You must specify a 2D image file format that supports transparency, such as PNG or AVIF.
Create a 360 animation of the cute-kitty 3D model (fl_animated,f_webp) with the camera positioned 60 degrees up and starting at 45 degrees to the right, capturing 36 frames (e_camera:up_60;right_45;frames_36), and with a shadow intensity of 50 (e_light:shadowintensity_50):
Makes the background of an image or video transparent (or solid white for formats that do not support transparency). The background is determined as all pixels that resemble the pixels on the edges of an image or video, or the color specified by the color qualifier.
Tips
For images with a uniform background, you may also want to try the e_bgremoval effect.
Emphasize the lines in a line drawing of a kitten by applying a plus-shaped kernel of 5 pixels using the dilate method (e_morphology:method_dilate;kernel_plus;radius_5.0):
Apply convolution to a market picture (e_morphology:method_convolve;radius_1.0):
A qualifier that blends image layers using the multiply blend mode, whereby the normalized RGB channel values (in the range 0-1) of each pixel in the top layer are multiplied by those of the corresponding pixel in the bottom layer. The result is always a darker picture: because each normalized value is at most 1, the product is never greater than either of the initial values.
Causes all semi-transparent pixels in an image to be either fully transparent or fully opaque. Specifically, each pixel with an opacity lower than the specified threshold level is set to an opacity of 0% (transparent). Each pixel with an opacity greater than or equal to the specified level is set to an opacity of 100% (opaque).
Note
This effect can be a useful solution when Photoshop PSD files are delivered in a format supporting partial transparency, such as PNG, and the results without this effect are not as expected.
Adds an outline effect to an image. Specify the color of the outline using the co (color) qualifier. If no color is specified, the default outline is black.
A qualifier that blends image layers using the overlay blend mode, which combines the multiply and screen blend modes. The parts of the top layer where the base layer is light become lighter, and the parts where the base layer is dark become darker. Areas where the top layer is mid-gray are unaffected.
Generates a summary of a video based on Cloudinary's AI-powered preview algorithm, which identifies the most interesting video segments in a video and uses these to generate a video preview.
Converts the colors of every pixel in an image based on a supplied color matrix, in which the value of each color channel is calculated based on the values from all other channels (e.g. a 3x3 matrix for RGB, a 4x4 matrix for RGBA or CMYK, etc).
Maps an input color and those similar to the input color to corresponding shades of a specified output color, taking luminosity and chroma into account, in order to recolor an object in a natural way. More highly saturated input colors usually give the best results. It is recommended to avoid input colors approaching white, black, or gray.
Notes
This transformation only supports non-verbose, ordered syntax, so remember to include the tolerance parameter if specifying a from color, even if you intend to use the default tolerance.
Consider using e_gen_recolor if you want to specify particular elements in your image to recolor, rather than everything with the same color.
A qualifier that blends image layers using the screen blend mode, whereby the RGB channel numbers of the pixels in the two layers are inverted, multiplied, and then inverted again. This yields the opposite effect to multiply, and results in a brighter picture.
Blends an image with one or more tint colors at a specified intensity. You can optionally equalize colors before tinting and specify gradient blend positioning per color.
Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
Vectorizes an image. The values can be specified either in an ordered manner according to the above syntax, or by name as shown in the examples below.
Notes
To deliver an image as a vector image, make sure to change the format (or URL extension) to a vector format, such as SVG. However, you can also deliver in a raster format if you just want to get the 'vectorized' graphic effect.
Large images are scaled down to 1000 pixels in the largest dimension before vectorization.
Also known as the Ken Burns effect, this transformation applies zooming and/or panning to an image, resulting in a video or animated GIF (depending on the format you specify by either changing the extension or using the format parameter).
You can either specify a mode, which is a predefined type of zoom/pan, or you can provide custom start and end positions for the zoom and pan. You can also use the gravity parameter to specify different start and end areas, such as objects, faces, and automatically determined areas of interest.
Notes
The resulting video or animated GIF does not go outside the bounds of the original image. So, if you specify an x,y position of (0,0), for example, the center of the frame will be as close to the top left as possible, but will not be centered on that position.
The resolution of your image needs to be sufficient for the zoom level that you choose to maintain good quality.
To achieve the best visual quality, the output resolution of the resulting video or animated image should be less than or equal to the input image resolution divided by the maximum zoom level. For example, if your original image has a width of 1920 pixels, and your maximum zoom is 3.2, you should display the resulting video at a width of 600 pixels or less (e.g. chain c_scale,w_600 onto the end of the transformation).
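The display-width guideline above is simple arithmetic. A minimal helper (not part of any SDK) that applies it:

```python
import math

def max_display_width(original_width: int, max_zoom: float) -> int:
    """Recommended maximum output width so each zoomed frame still has
    enough source pixels: the original width divided by the max zoom."""
    return math.floor(original_width / max_zoom)

# The example from the text: a 1920px-wide source with a maximum zoom of 3.2.
print(max_display_width(1920, 3.2))  # 600
```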
If you apply the zoompan effect to an animated image, the first frame of the animated image is taken as the input.
To achieve a smoother zoom, you can increase the frame rate, extend the length of the time over which the zoom occurs, and reduce the difference between zoom levels at the start and end of the transformation.
The zoompan effect won't work if the resulting video exceeds the limits set for your account. As a general rule, use images that don't exceed 5000 x 5000 pixels.
Currently, you can't use automatic gravity (g_auto) in other transformation components that are chained with the zoompan effect.
Create a ten second GIF (.gif) from an image of a living room, zooming into the right of the image, with a maximum zoom of 6.5, and using the default frame rate (e_zoompan:mode_ztr;maxzoom_6.5;du_10). This also uses the loop effect (e_loop):
Create an eight second MP4 video (.mp4) from a map of the USA, zooming in from a position in the northwest of the USA map (x=300, y=100 pixels), to North Carolina at (x=950, y=400 pixels) (e_zoompan:from_(zoom_2;x_300;y_100);to_(zoom_4;x_950;y_400);du_8;fps_40).
Create a four second MP4 video (.mp4) of a giraffe, zooming out from the center (e_zoompan:from_(zoom_8)). All other parameters are set to default.
Create a seven second MP4 video (.mp4) of a model wearing fashionable items, starting zoomed into the hat (from_(g_hat;zoom_4.5)), then zooming out and panning to the pants (to_(g_pants;zoom_1.6)).
Create an MP4 video (.mp4) that concatenates a pan from northwest to southeast (from_(g_north_west;zoom_2.2);to_(g_south_east;zoom_2.2)) with a pan from northeast to southwest (from_(g_north_east;zoom_2.2);to_(g_south_west;zoom_2.2)).
Create an MP4 video (.mp4) that zooms out from an area of the photo including the house, to the girl e_zoompan:du_6;from_(g_auto:house;zoom_3.4);to_(g_girl;zoom_1.4):
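The nested option syntax in these examples (semicolon-separated options, with `from_(...)` and `to_(...)` as parenthesized groups) can be assembled programmatically. This is a minimal illustrative helper, not part of any SDK:

```python
def zoompan(from_=None, to=None, du=None, fps=None):
    """Assemble an e_zoompan component. `from_` and `to` are dicts of
    option name -> value, rendered as parenthesized groups whose
    entries are ';'-separated, matching the URL syntax shown above."""
    opts = []
    if from_:
        opts.append("from_(%s)" % ";".join(f"{k}_{v}" for k, v in from_.items()))
    if to:
        opts.append("to_(%s)" % ";".join(f"{k}_{v}" for k, v in to.items()))
    if du is not None:
        opts.append(f"du_{du}")
    if fps is not None:
        opts.append(f"fps_{fps}")
    return "e_zoompan:" + ";".join(opts)

# Reproduce the USA-map example component from the text:
component = zoompan(
    from_={"zoom": 2, "x": 300, "y": 100},
    to={"zoom": 4, "x": 950, "y": 400},
    du=8,
    fps=40,
)
print(component)
```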
Specifies the last second to include in a video (or audio clip). This parameter is often used in conjunction with the so (start offset) and/or du (duration) parameters.
Can be used independently to trim a video (or audio clip) by specifying the last second of the video to include. Everything after that second is trimmed off.
Can be used as a qualifier to control the timing of a corresponding transformation.
Overlay a small version of the ski_jump video over the dog video, starting from the beginning of the dog video (no start offset), and removing the overlay after 6 seconds, the end offset (eo_6.0):
Converts (if necessary) and delivers an asset in the specified format regardless of the file extension used in the delivery URL.
Must be used for automatic format selection (f_auto) as well as when fetching remote assets, while the file extension for the delivery URL remains the original file extension.
In most other cases, you can optionally use this transformation to change the format as an alternative to changing the file extension of the public ID in the URL to a supported format. Both will give the same result.
Note
In SDK major versions with initial release earlier than 2020, the name of this parameter is fetch_format. These SDKs also have a format parameter, which is not a transformation parameter, but is used to change the file extension, as shown in the file extension examples - #2.
The later SDKs have a single format parameter (which parallels the behavior of the fetch_format parameter of older SDKs). You can use this to change the actual delivered format of any asset, but if you prefer to convert the asset to a different format by changing the extension of the public ID in the generated URL, you can do that in these later SDKs by specifying the desired extension as part of the public ID value, as shown in file extension examples - #1.
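To illustrate the two equivalent approaches described above, here is a sketch of the resulting URLs (the cloud name and public ID are placeholders):

```python
# Sketch: two equivalent ways to request PNG delivery of the same asset.
# "demo" and "sample" are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"
public_id = "sample"

url_via_parameter = f"{base}/f_png/{public_id}"  # f_<format> transformation component
url_via_extension = f"{base}/{public_id}.png"    # file extension on the public ID

print(url_via_parameter)
print(url_via_extension)
```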
Automatically generates (if needed) and delivers an asset in the optimal format for the requesting browser in order to minimize the file size.
Optionally, include a media type to ensure the asset is delivered with the desired media type when no file extension is included. For example, when delivering a video using f_auto and no file extension is provided, the media type will default to an image unless f_auto:video is used.
Note
When used in conjunction with automatic quality (q_auto), sometimes the selected format is not the one that minimizes file size, but rather the format that yields the optimal balance between smaller file size and good visual quality.
Deliver the dog.mp4 video as WebM (VP9) to Chrome browsers, MP4 (HEVC) to Safari browsers, or as an MP4 (H.264) to browsers that support neither of the aforementioned formats:
Deliver the dog video with automatic format selection, ensuring it is delivered as a video when no file extension is used:
Defines an audio layer to be used as an alternate audio track for videos delivered using automatic streaming profile selection. Used to provide multiple audio tracks, for example when you want to provide audio in multiple languages.
Alters the regular video delivery behavior by delivering a video file as an animated image instead of a single frame image, when specifying an image format that supports both still and animated images, such as webp or avif.
When delivering a video and specifying the GIF format (either f_gif or specifying a GIF extension) it's automatically delivered as an animated GIF and this flag is not necessary. To force Cloudinary to deliver a single frame of a video in GIF format, use the page parameter.
Alters the regular behavior of the q_auto parameter, allowing it to switch to PNG8 encoding if the automatic quality algorithm decides that's more efficient.
The apng (animated PNG) flag alters the regular PNG delivery behavior by delivering an animated image asset in animated PNG format rather than a still PNG image. Keep in mind that animated PNGs are not supported in all browsers and versions.
Use with: fl_animated | f_png (or when specifying png as the delivery URL file extension).
Alters the regular delivery URL behavior, causing the URL link to download the (transformed) file as an attachment rather than embedding it in your Web page or application.
Note
You can also use this flag with raw files to specify a custom filename for the download. The generated file's extension will match the raw file's original extension.
The awebp (animated WebP) flag alters the regular WebP delivery behavior by delivering an animated image or video asset in animated WebP format rather than as a still WebP image. Keep in mind that animated WebPs are not supported in all browsers and versions.
Use with: fl_animated | f_webp (or when specifying webp as the delivery URL file extension).
Use the c2pa flag when delivering images that you want to be signed by Cloudinary for the purposes of C2PA (Coalition for Content Provenance and Authenticity).
For images with a clipping path saved with the originally uploaded image (e.g. manually created using Photoshop), makes everything outside the clipping path transparent.
If there are multiple paths stored in the file, you can indicate which clipping path to use by specifying either the path number or name as the value of the page parameter (pg in URLs).
For images with a clipping path saved with the originally uploaded image, makes pixels transparent based on the clipping path using the 'evenodd' clipping rule to determine whether points are inside or outside of the path.
Trims the pixels on the base image according to the transparency levels of a specified overlay image. Where the overlay image is opaque, the original is kept and displayed, and wherever the overlay is transparent, the base image becomes transparent as well. This results in a delivered image displaying the base image content trimmed to the exact shape of the overlay image.
Trim a (fetched) image of a water drop based on the shape of a text overlay definition (l_text:Unkempt_250_bold:Water/fl_cutter). The text overlay is defined with the desired font and size of the resulting delivered image:
Transform a picture into an old photograph by using an image of torn paper to trim the picture, and again as a layer with an opacity of 40 to achieve a weathered look:
For images, the returned JSON includes the cropping coordinates recommended by the g_auto algorithm.
For videos, the returned JSON includes the cropping confidence score for the whole video and per second in addition to the horizontal center point of each frame (on a scale of 0 to 1) recommended by the g_auto algorithm.
g_<face-specific-gravity>: For images, the returned JSON includes the coordinates of facial landmarks relative to the top-left corner of the original image.
e_preview: For videos, the returned JSON includes an importance histogram for the video.
Return a JSON with the importance histogram that Cloudinary would use to generate a video preview of this ImageCon video (e_preview,fl_getinfo):
Tip
Click the URL to see the JSON response.
Return a JSON with the automatically generated horizontal center points per frame for the specified cropping transformation (g_auto,ar_1,w_400,c_fill/fl_getinfo):
Applies Group 4 compression to the image. Currently applicable to TIFF files only. If the original image is in color, it is transformed to black and white before the compression is applied.
Use with: f_tiff (or when specifying tiff as the delivery URL file extension)
A qualifier that adjusts the behavior of scale cropping. By default, when only one dimension (width or height) is supplied, the other dimension is automatically calculated to maintain the aspect ratio. When this flag is supplied together with a single dimension, the other dimension keeps its original value, thus distorting an image by scaling in only one direction.
Sets the cache-control for an image to be immutable, which instructs the browser that an image does not have to be revalidated with the server when the page is refreshed, and can be loaded directly from the cache. Currently supported only in Firefox.
Cloudinary's default behavior is to strip almost all metadata from a delivered image when generating new image transformations. Applying this flag alters this default behavior, and keeps all the copyright-related fields while still stripping the rest of the metadata.
Cloudinary's default behavior is to strip almost all embedded metadata from a delivered image when generating new image transformations. Applying this flag alters this default behavior, and keeps all of an image's embedded metadata in the transformed image.
Note
This flag cannot be used in conjunction with q_auto.
A qualifier that enables you to apply chained transformations to an overlaid image or video. The first component of the overlay (l_<image_id>) acts as an opening parenthesis of the overlay transformation and the fl_layer_apply component acts as the closing parenthesis. Any transformation components between these two are applied as chained transformations to the overlay and not to the base asset.
This flag is also required when concatenating images to videos or concatenating videos with custom transitions.
When used with an animated GIF file, instructs Cloudinary to use lossy compression when delivering an animated GIF. By default a quality of 80 is applied when delivering with lossy compression. You can use this flag in conjunction with a specified q_<quality_level> to deliver a higher or lower quality level of lossy compression.
When used while delivering a PNG format, instructs Cloudinary to deliver an image in PNG format (as requested) unless there is no transparency channel, in which case, deliver in JPEG format instead.
Use with: f_gif with or without q_<quality level> | f_png
(or when specifying gif or png as the delivery URL file extension)
By default, Cloudinary delivers PNGs in PNG-24 format, or if f_auto and q_auto are used, these determine the PNG format that minimizes file size while maximizing quality. In some cases, the algorithm will select PNG-8. By specifying one of these flags when delivering a PNG file, you can override the default Cloudinary behavior and force the requested PNG format.
Generates a JPG or PNG image using the progressive (interlaced) format. This format allows the browser to quickly show a low-quality rendering of the image until the full quality image is loaded.
A qualifier that instructs Cloudinary to interpret percentage-based (e.g. 0.8) width and height values for an image layer (overlay or underlay) as a percentage that is relative to the size of the defined or automatically detected region(s). For example, the region may be the coordinates of an automatically detected face or piece of text, or a custom-defined region. If an image has multiple regions, then the specified overlay image will be overlaid over each identified region at a size relative to the region it overlays.
Place the 'call text' image over each detected text region in the image, at 1.1x (110%) of the size of each region. This transformation uses g_ocr_text, which triggers the OCR Text Detection and Extraction Add-on to detect the text regions and pass those back to the transformation on the fly:
A qualifier that instructs Cloudinary to interpret percentage-based (e.g. 0.8) width and height values for an image layer (overlay or underlay) as a percentage that is relative to the size of the base image, rather than relative to the original size of the specified overlay image. This flag enables you to use the same transformation to add an overlay to images that will always resize to a relative size of whatever image it overlays.
A qualifier that takes the image specified as an overlay and uses it to replace the first image embedded in a PDF.
Transformation parameters that modify the appearance of the overlay (such as effects) can be applied. However, when this flag is used, the overlay image is always scaled exactly to the dimensions of the image it replaces. Therefore, resize transformations applied to the overlay are ignored. For this reason, it is important that the image specified in the overlay matches the aspect ratio of the image in the PDF that it will replace.
A qualifier that concatenates (splices) the image, video or audio file specified as an overlay to a base video (instead of placing it as an overlay). By default, the overlay image, video or audio file is spliced to the end of the base video. You can use the start offset parameter set to 0 (so_0) to splice the overlay asset to the beginning of the base video by specifying it alongside fl_layer_apply. You can optionally provide a cross fade transition between assets.
Note
Make sure you read the important notes regarding concatenating media.
Splice the first 5 seconds of the video named kitten_fighting to the beginning of the video named dog rotated by 180 degrees, with both videos set to a width of 300 pixels and a height of 200 pixels (du_5,fl_splice,l_video:kitten_fighting/c_fill,h_200,w_300/fl_layer_apply,so_0):
Splice the first 8 seconds of the video named kitchen, and an image named house-exterior to the first 7 seconds of the video named livingspace using cross fade transitions, circleopen for 2.5 seconds (fl_splice:transition_(name_circleopen;du_2.5)) and fade for one second (the default) (fl_splice:transition):
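The chained component structure of the first splice example above can be sketched as follows (components are '/'-separated, options within a component are ','-separated):

```python
# Sketch: chained transformation components for splicing a 5-second clip
# of one video to the beginning of another, following the first splice
# example above.
chain = [
    "du_5,fl_splice,l_video:kitten_fighting",  # open the overlay, trim it to 5s
    "c_fill,h_200,w_300",                      # resize the overlay
    "fl_layer_apply,so_0",                     # close the overlay, splice at the start
]
transformation = "/".join(chain)
print(transformation)
```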
Like fl_attachment, this flag alters the regular video delivery URL behavior, causing the URL link to download the (transformed) video as an attachment rather than embedding it in your Web page or application. Additionally, if the video transformation is being requested and generated for the first time, this flag causes the video download to begin immediately, streaming it as a fragmented video file.
(Most standard video players successfully play fragmented video files without issue.)
(In contrast, if the regular fl_attachment flag is used, then when a user requests the video transformation for the first time, the download will begin only after the complete transformed video has been generated.)
Note
HLS (.m3u8) and MPEG-DASH (.mpd) files are by nature non-streamable. If this flag is used with a video in one of those formats, it behaves identically to the regular fl_attachment flag.
A qualifier used with text overlays that fails the transformation and returns a 400 (bad request) error if the text (in the requested size and font) exceeds the base image boundaries. This can be useful if the expected text of the overlay and/or the size of the base image isn't known in advance, for example with user-generated content. You can check for this error and if it occurs, let the user who supplied the text know that they should change the font, font size, or number of characters (or alternatively that they should provide a larger base image).
If you deliver the above transformation, but without any flags, the text would extend the overall width of the delivered image to display the text in its entirety.
If you deliver the above transformation with the fl_no_overflow flag instead of the fl_text_disallow_overflow flag, the image would be delivered according to the requested dimensions, and any excess text would be cut off.
A qualifier used with text overlays that adds a small amount of padding around the text overlay string. Without this flag, text overlays are trimmed tightly to the text with no excess padding.
Truncates (trims) a video file based on the times defined in the video file's metadata (relevant only where the file metadata includes a directive to play only a section of the video).
Instead of delivering the audio or video file, generates and delivers a waveform image in the requested image format based on the audio from the audio or video file. By default, the waveform color is white and the background is black. You can customize these using the co_<color> and b_<color> qualifiers.
Injects a custom function into the image transformation pipeline. You can use a remote function/lambda as your source, run WebAssembly functions from a compiled .wasm file stored in your Cloudinary product environment, deliver assets based on filters using tags and structured metadata, or filter assets returned when generating a client-side list.
fps_<frames per second>[-<maximum frames per second>]
Controls the FPS (Frames Per Second) of a video or animated image to ensure that the asset (even when optimized) is delivered with an expected FPS level (for video, this helps with sync to audio). Can also be specified as a range.
A qualifier that determines which part of an asset to focus on, and thus which part of the asset to keep, when any part of the asset is cropped. For overlays, this setting determines where to place the overlay.
A qualifier that defines a special position within the asset to focus on.
Note
The only special position that is supported for animated images is custom. If other positions are specified in an animated image transformation, center gravity is applied.
A qualifier to automatically identify the most interesting regions in the asset, and include in the crop.
Notes
Automatic gravity is not supported for animated images. If g_auto is used in an animated image transformation, center gravity is applied, except when c_fill_pad is also specified, in which case an error is returned.
Any custom coordinates defined for a specific image will override the automatic cropping algorithm and only the custom coordinates will be used 'as is' for the gravity, unless you specify 'custom_no_override' or 'none' as the focal_gravity.
Automatically crop an image to a square aspect ratio, based on the areas most likely to attract a person's initial gaze. (ar_1:1,c_fill,g_auto:subject):
Automatically crop an image to a square aspect ratio while focusing on a cat in the image (ar_1:1,c_crop,g_auto:cat,w_1000):
Automatically crop an image using custom coordinates to specify the area of interest to keep (g_auto:aoi_420_230_330_160):
A qualifier to specify a named clipping path in the image to focus on when cropping an image. Works on file formats that can contain clipping paths such as TIFF.
Note
Clipping paths work when the original image is 64 megapixels or less. Above that limit, the clipping paths are ignored.
This is different from using fl_clip,pg_name:front, as the dimensions of the image are cropped to the clipping path, rather than maintaining the size of the original image.
A qualifier to add an image or text layer that tracks the position of a person throughout a video. Can be used with fashion object detection to conditionally add the layer based on the presence of a specified object.
Notes
Only one tracked layer can be applied at a time.
The maximum video duration that tracked layers can be applied to is 3 minutes.
When requesting your video on the fly, you will receive a 423 response until the video has been processed. Once processed, subsequent transformations will be applied synchronously.
You can apply transformations to the layer, such as controlling duration, by adding those into the layer definition component (e.g. l_price_tag,du_3)
Add a sale icon to a product image if both the strings 'sale' and 'in_stock' are among the tags assigned to the image (if_!sale:in_stock!_in_tags/l_sale_icon/c_scale,w_180/fl_layer_apply,g_south_east,x_30,y_30/if_end):
Apply a condition if the width is less than or equal to 400 pixels then fill the image to 220x180 and add a red effect, else if the width is greater than 400 pixels then fill the image to 190x300 and add an oil painting effect (if_w_lte_400/c_fill,h_220,w_180/e_red/if_else/c_fill,h_190,w_300/e_oil_paint/if_end):
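The conditional chain from the width-based example above can be sketched as a sequence of '/'-separated components, where if_ opens the condition and if_else / if_end delimit the branches:

```python
# Sketch: the conditional transformation chain from the example above.
chain = [
    "if_w_lte_400",        # condition: width <= 400 pixels
    "c_fill,h_220,w_180",  # then-branch: fill crop
    "e_red",               #              red effect
    "if_else",             # else-branch follows
    "c_fill,h_190,w_300",
    "e_oil_paint",
    "if_end",              # close the condition
]
print("/".join(chain))
```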
Applies a layer over the base asset, also known as an overlay. This can be an image or video overlay, a text overlay, subtitles for a video or a 3D lookup table for images or videos.
In addition to these common overlay transformations, you can apply nearly any supported image or video transformation to an image or video overlay, including applying chained transformations, by using the fl_layer_apply flag to indicate the end of the layer transformations.
Add the overlay with the public ID cloudinary_icon to the video, between 6.5 and 10 seconds, with 50% opacity and a brightness of value 100 (l_cloudinary_icon,so_6.5,eo_10,o_50,e_brightness:100):
Overlays the specified audio track on a base video or another audio track. If you specify a video to overlay, only the audio track will be applied. You can use this to mix multiple audio tracks together or add additional audio tracks when using automatic streaming profile selection.
Add the white Cloudinary logo with URL, https://res.cloudinary.com/demo/image/upload/v1602436129/logos/cloudinary_full_logo_white_small.png (base64 encoded: aHR0cHM6Ly9yZXMuY2xvdWRpbmFyeS5jb20vZGVtby9pbWFnZS91cGxvYWQvdjE2MDI0MzYxMjkvbG9nb3MvY2xvdWRpbmFyeV9mdWxsX2xvZ29fd2hpdGVfc21hbGwucG5n), as an overlay to the first five seconds of the video, offset from the north west corner of the video by (15, 15) pixels (l_fetch:aHR0cHM6Ly9yZXMuY2xvdWRpbmFyeS5jb20vZGVtby9pbWFnZS91cGxvYWQvdjE2MDI0MzYxMjkvbG9nb3MvY2xvdWRpbmFyeV9mdWxsX2xvZ29fd2hpdGVfc21hbGwucG5n/eo_5.0,fl_layer_apply,g_north_west,x_15,y_15):
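The base64 string in this example is simply the encoded remote URL. A sketch of producing the l_fetch component:

```python
import base64

# Sketch: build the l_fetch layer component by base64-encoding the
# remote image URL, as in the example above.
remote_url = (
    "https://res.cloudinary.com/demo/image/upload/v1602436129/"
    "logos/cloudinary_full_logo_white_small.png"
)
encoded = base64.b64encode(remote_url.encode("utf-8")).decode("ascii")
layer_component = f"l_fetch:{encoded}"
print(layer_component)
```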
Applies a 3D lookup table (3D LUT) to an image or video. LUTs are used to map one color space to another. The LUT file must first be uploaded to Cloudinary as a raw file.
Embed subtitle texts from an SRT or WebVTT file into a video. The subtitle file must first be uploaded as a raw file.
You can optionally set the font and font-size (as optional values of your l_subtitles parameter) as well as subtitle text color and either subtitle background color or subtitle outline color (using the co and b/bo optional qualifiers). By default, the texts are added in Arial, size 15, with white text and black border.
Add a text overlay stating "Smile!" (! = %21 escaped) in yellow text with a blue 10 pixel outline stroke, using 200 pixel bold and italic Arial font with 50 pixel letter spacing (co_yellow,l_text:Arial_200_bold_italic_stroke_letter_spacing_50:Smile%21/bo_10px_solid_blue/fl_layer_apply,g_south):
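As the example notes, special characters in the overlay text must be percent-encoded ('!' becomes %21). Python's standard library handles this; a minimal sketch:

```python
from urllib.parse import quote

# Sketch: URL-escape the overlay text so that special characters such
# as '!' are percent-encoded before being placed in the l_text component.
text = "Smile!"
escaped = quote(text, safe="")
print(f"l_text:Arial_200_bold_italic:{escaped}")
```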
Adjusts the opacity of an asset and makes it semi-transparent.
Note
If the image format does not support transparency, the background color is used instead as a base (white by default). The color can be changed with the background parameter.
A quality level of 100 can increase the file size significantly, particularly for video, as it is delivered lossless and uncompressed. As a result, a video with a quality level of 100 isn't playable on every browser.
Specifies the first second to include in the video (or audio clip). This parameter is often used in conjunction with the eo (end offset) and/or du (duration) parameters.
Can be used independently to trim the video (or audio clip) by specifying the first second of the video to include. Everything prior to that second is trimmed off.
Can be used as a qualifier to control the timing of a corresponding transformation.
Can be used to indicate the frame of the video to use for generating video thumbnails.
Overlay a scaled-down version of the ski_jump video over the dog video, starting from the third second of the dog video (the start offset) and removing the overlay after 6 seconds (the end offset) (l_video:ski_jump/c_scale,w_250/eo_6.0,fl_layer_apply,g_north_east,so_3.0):
Automatically select a frame to be used as a thumbnail or poster image for the video (so_auto):
Lets Cloudinary choose the best streaming profile on the fly for both HLS and DASH. You can limit the resolution at which to stream the video by specifying the maximum resolution.
Specifies the streaming profile to apply when delivering a video using HLS or MPEG-DASH adaptive bitrate streaming. Optionally allows for defining subtitles tracks for HLS, which will be defined as part of the manifest file.
In addition to these common underlay transformations, you can apply nearly any supported image transformation to an image underlay, including applying chained transformations, by using the fl_layer_apply flag to indicate the end of the layer transformations.
Add the underlay with the public ID site_bg underneath a transparent WebM which is resized to match the size of the base image (u_site_bg,w_1.0,h_1.0,fl_relative):
Sets the sampling rate to use when converting videos or animated images to animated GIF or WebP format. If not specified, the resulting GIF or WebP samples the whole video/animated image (up to 400 frames, at up to 10 frames per second). By default, the duration of the resulting animated image is the same as the duration of the input, no matter how many frames are sampled from the original video/animated image (use the dl (delay) parameter to adjust the amount of time between frames).
A qualifier that determines how to automatically resize an image to match the width available for the image in a responsive layout. The parameter can be further customized by overriding the default rounding step or by using automatic breakpoints.
w_auto[:<rounding step>][:<fallback width>]
The width is rounded up to the nearest rounding step (every 100 pixels by default) in order to avoid creating extra derived images and consuming too many extra transformations. Only works for certain browsers and when Client-Hints are enabled.
The width is rounded up to the nearest breakpoint, where the optimal breakpoints are calculated using either the default breakpoint request settings or using the given settings.
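The default rounding-step behavior described above amounts to rounding the available width up to the nearest multiple of the step. A small illustrative helper (not part of any SDK):

```python
import math

def rounded_width(available_width: int, step: int = 100) -> int:
    """Round the available layout width up to the nearest rounding step
    (100 pixels by default), mirroring the w_auto behavior described above."""
    return math.ceil(available_width / step) * step

print(rounded_width(347))      # -> 400
print(rounded_width(512, 50))  # -> 550
```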
Positive values fade from the top (y) or left (x). Negative values fade from the bottom (y) or right (x). Values between 0.0 and 1.0 indicate a percentage. Integer values indicate pixels.
The offset of the shadow relative to the image in pixels. Positive values offset the shadow right (x) or down (y). Negative values offset the shadow left (x) or up (y).
Position an overlay that is offset horizontally and vertically by 15% from the north east corner (c_thumb,g_face,h_150,w_150/l_badge/c_scale,w_0.08/fl_layer_apply,g_north_east,x_0.15,y_0.15).
A qualifier that controls how close to crop to the detected coordinates when using face-detection, custom-coordinate, or object-specific gravity (when using the Cloudinary AI Content Analysis addon).
When used with thumb resize mode, the detected coordinates are scaled to completely fill the requested dimensions and then cropped as needed.
When used with the crop resize mode, the zoom qualifier has an impact only if resize dimensions (height and/or width) are not specified. In this case, the crop dimensions are determined by the detected coordinates and then adjusted based on the requested zoom.
Define a user-defined variable called $width and set it to 150 pixels ($width_150). Then pass the value of the variable to the named transformation, passport_photo, which references the $width variable (t_passport_photo):
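The variable declaration and the named transformation each occupy their own chained component, which can be sketched as:

```python
# Sketch: declare the user-defined variable $width, then apply the named
# transformation that references it, as separate '/'-separated components.
chain = ["$width_150", "t_passport_photo"]
print("/".join(chain))
```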