
[BUG] User can break entire workflow by clearing /tmp/ #16695

@TomFrankly

Description


Describe the bug
User footgun: I wrote a quick action step to fully clear my temp directory as I was getting out-of-memory errors and I wondered if clearing the temp directory and doing a full reset of the Node.js process would help things.

Afterwards, all subsequent runs of my workflow hit this error.

Error
ENOENT: no such file or directory, open '/tmp/pdg/dist/code/6cd3c8c47094f4895a998082ceaee33c523cf9227303fea2005dc38619199bf8/package.json'

To Reproduce
Steps to reproduce the behavior:

Write and run an action step with the following code:

// To use any npm package, just import it
// import axios from "axios"

import { exec } from "child_process";
import { promisify } from "util";
import fs from "fs";

const execAsync = promisify(exec);

export default defineComponent({
    async run({ steps, $ }) {
        console.log("Starting comprehensive cleanup...");
        
        try {
            // 1. Kill any running FFmpeg processes
            console.log("Killing any running FFmpeg processes...");
            try {
                await execAsync("pkill -f ffmpeg");
                console.log("FFmpeg processes terminated");
            } catch (error) {
                console.log("No FFmpeg processes found or error killing them:", error.message);
            }

            // 2. Clear the /tmp directory
            console.log("Clearing /tmp directory...");
            try {
                // List all files in /tmp
                const files = await fs.promises.readdir("/tmp");
                
                // Delete each file (intended to skip the __pdg__ directory, but no entry is actually excluded)
                for (const file of files) {
                    try {
                        const filePath = `/tmp/${file}`;
                        
                        const stats = await fs.promises.stat(filePath);
                        
                        if (stats.isDirectory()) {
                            await execAsync(`rm -rf "${filePath}"`);
                        } else {
                            await fs.promises.unlink(filePath);
                        }
                    } catch (error) {
                        console.log(`Error deleting ${file}:`, error.message);
                    }
                }
                console.log("Temporary files cleared (preserving Pipedream files)");
            } catch (error) {
                console.log("Error clearing /tmp:", error.message);
            }

            // 3. Clear Node.js process memory
            console.log("Clearing Node.js process memory...");
            if (global.gc) {
                global.gc();
                console.log("Garbage collection completed");
            } else {
                console.log("Garbage collection not available");
            }

            // 4. Clear any remaining child processes
            console.log("Clearing any remaining child processes...");
            try {
                await execAsync("pkill -P $$");
                console.log("Child processes cleared");
            } catch (error) {
                console.log("No child processes found or error clearing them:", error.message);
            }

            console.log("Cleanup completed successfully");
            return {
                status: "success",
                message: "Execution environment has been reset",
                timestamp: new Date().toISOString()
            };
        } catch (error) {
            console.error("Error during cleanup:", error);
            return {
                status: "error",
                message: error.message,
                timestamp: new Date().toISOString()
            };
        }
    }
});
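For comparison, a safer variant of the cleanup loop would explicitly skip the runtime's own directories before deleting anything. This is a sketch, not Pipedream's documented behavior: the reserved names `pdg` and `__pdg__` are assumptions based on the error path above, and the real set of directories the platform depends on may differ.

```javascript
import fs from "fs";
import path from "path";

// Directory names assumed to belong to the Pipedream runtime (an assumption
// inferred from the /tmp/pdg/... path in the error; verify before relying on it).
const PRESERVED = new Set(["pdg", "__pdg__"]);

export async function cleanTmpDir(tmpDir = "/tmp", preserved = PRESERVED) {
    const entries = await fs.promises.readdir(tmpDir);
    for (const entry of entries) {
        if (preserved.has(entry)) continue; // leave runtime directories alone
        const entryPath = path.join(tmpDir, entry);
        try {
            // rm with recursive + force handles both files and directories,
            // and does not throw if the entry has already disappeared
            await fs.promises.rm(entryPath, { recursive: true, force: true });
        } catch (error) {
            console.log(`Error deleting ${entry}:`, error.message);
        }
    }
}
```

Dropping this function into the `run()` body in place of the per-file `stat`/`unlink`/`rm -rf` loop keeps the same behavior for user files while never touching the preserved names.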

Screenshots
(screenshot of the error in the Pipedream UI; image not reproduced here)

Additional context

I'm trying to figure out an issue where my workflow sometimes hits an out-of-memory error, even when it's processing a very small file that should pose no problem given the workflow's RAM settings.

This started happening consistently after I ran a test on a particularly large file that caused my workflow to time out.

My workflow uses FFmpeg, so I wondered whether the timeout had cut the execution environment short before it could fully clean up the FFmpeg process, or before it could clean up something in the temp directory, leaving all subsequent runs with a memory leak.

I wrote this action step to hopefully clear everything out and reset the workflow since that one failed run seemed to be negatively affecting all of my runs afterward.
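Before deleting anything, a read-only diagnostic step can show what is actually accumulating between runs. This sketch only inspects process memory and the top-level contents of a directory; nothing here is a documented Pipedream API, just plain Node.js:

```javascript
import fs from "fs";
import path from "path";

// Report process memory and the size (or kind) of each top-level entry
// in a directory, without modifying anything.
export async function inspectEnvironment(tmpDir = "/tmp") {
    const mem = process.memoryUsage();
    const entries = await fs.promises.readdir(tmpDir);
    const tmpEntries = {};
    for (const entry of entries) {
        try {
            const stats = await fs.promises.stat(path.join(tmpDir, entry));
            tmpEntries[entry] = stats.isDirectory() ? "dir" : stats.size;
        } catch {
            tmpEntries[entry] = "unreadable";
        }
    }
    return {
        rssMB: Math.round(mem.rss / 1024 / 1024),
        heapUsedMB: Math.round(mem.heapUsed / 1024 / 1024),
        tmpEntries,
    };
}
```

Running this at the start of each workflow execution and comparing the reports across runs would show whether leftover files or growing resident memory is the actual culprit, before resorting to a destructive cleanup.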


    Labels

    bug (Something isn't working)
    tracked internally (Issue is also tracked in our internal issue tracker)
    triaged (For maintainers: This issue has been triaged by a Pipedream employee)
