Engineering Project Cost Estimation


  • Andreas Bach

    Executive Leadership (COO / MD) | Scaling PV & BESS Platforms & Organizations | EPC, CAPEX & Operations | From Strategy to Reality

    12,840 followers

If you benchmark projects on €/kWp, you miss the point. The real metric is €/MWh.

In practice, I keep running into the same discussions: how do you compare Project A (say, in Eastern Europe) with Project B (say, in Southern Europe) when grid, construction, O&M and financing have totally different cost profiles? Instead of arguing over individual cost items, there’s a simpler way: look at LCOE (€/MWh).

What really matters (short & clear):
--> €/kWp is a construction indicator, not a success factor.
--> LCOE (€/MWh) captures CAPEX, OPEX, performance (PR/degradation), financing & lifetime.
--> A “more expensive” project can deliver cheaper power thanks to higher yield, longer lifetime, or better financing.
--> Investors and banks already benchmark on €/MWh, not €/kWp.

Number flavor (utility scale, all-in incl. EPC, development, financing):
--> Typical utility scale DE/CEE (2024): ~560–600 €/kWp all-in
--> Project A: 580 €/kWp, PR 80%, WACC 6%, 25 years -> ~49–52 €/MWh
--> Project B: 640 €/kWp, PR 87%, WACC 5%, 30 years -> ~40–43 €/MWh
--> Same installed capacity, different assumptions -> output beats input.

Do you still benchmark projects on €/kWp, or already on €/MWh? And which three variables move your LCOE the most: PR, WACC, O&M, or degradation?

#AndreasBach #LCOE #SolarPV #ProjectFinance #CleanEnergy
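For readers who want to reproduce the comparison, here is a minimal LCOE sketch in Python. Only CAPEX, PR, WACC and lifetime come from the post; the OPEX (10 €/kWp/yr), degradation (0.4%/yr) and specific yields (~1,100 and ~1,250 kWh/kWp, the latter reflecting a sunnier Southern European site) are illustrative assumptions.

```python
# Minimal LCOE sketch: discounted lifetime cost / discounted lifetime energy.
# Yield, OPEX and degradation are assumed values for illustration only.

def lcoe_eur_per_mwh(capex_eur_kwp, opex_eur_kwp_yr, yield_kwh_kwp_yr,
                     wacc, years, degradation=0.004):
    disc_cost = capex_eur_kwp          # CAPEX lands in year 0
    disc_energy_mwh = 0.0
    for t in range(1, years + 1):
        disc_cost += opex_eur_kwp_yr / (1 + wacc) ** t
        energy_mwh = yield_kwh_kwp_yr * (1 - degradation) ** (t - 1) / 1000
        disc_energy_mwh += energy_mwh / (1 + wacc) ** t
    return disc_cost / disc_energy_mwh

# Project A: 580 €/kWp, WACC 6%, 25 years, assumed ~1,100 kWh/kWp
print(f"A: {lcoe_eur_per_mwh(580, 10, 1100, 0.06, 25):.0f} €/MWh")  # ~52
# Project B: 640 €/kWp, WACC 5%, 30 years, assumed ~1,250 kWh/kWp
print(f"B: {lcoe_eur_per_mwh(640, 10, 1250, 0.05, 30):.0f} €/MWh")  # ~43
```

Under these assumptions the sketch lands inside the post's quoted ranges, and makes it easy to see which input moves the result most.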

  • Da Yan

    Full Professor at Tsinghua University, Editor-in-Chief of Building Simulation Journal, IBPSA Fellow

    4,629 followers

Building Simulation cover article: "Informing electrification strategies of residential neighborhoods with urban building energy modeling"

Electrifying end uses is a key strategy for reducing GHG emissions in buildings. However, it may increase peak electricity demand, triggering upgrades to the existing power distribution system, which delays electrification and requires significant investment. There is also concern that building electrification may raise energy costs, adding to the energy burden of low-income communities.

This study uses the urban-scale building energy modeling tool CityBES to assess the electrification impacts on more than 43,000 residential buildings in a neighborhood of Portland, Oregon, USA, and investigates whether energy efficiency upgrades can mitigate the increase in peak electricity demand and energy burden. Simulation results from the calibrated EnergyPlus models show that electrification with heat pumps for space heating and cooling, as well as for domestic water heating, can reduce CO2e emissions by 38% but increases peak electricity demand by about 9% over the baseline building stock. Combining electrification measures with energy efficiency upgrades can reduce CO2e emissions by 48% while reducing peak electricity demand by 6% and cutting median household energy costs by 28%.

City and utility decision makers should consider integrating energy efficiency upgrades with electrification measures as an effective residential electrification strategy: it significantly reduces carbon emissions, caps or even decreases peak demand, and reduces residents' energy burden.

Details of the research can be found at https://lnkd.in/gSCi-W3k The article is co-authored by Tianzhen Hong, Sang Hoon Lee, Wanni Zhang, Han Li, Kaiyu Sun & Joshua Kace

#BuildingSimulation #CityBES #decarbonization #electrification #cover

  • Anshuman Mishra

    ML @ Zomato

    26,525 followers

“Just rent a GPU for training.”

Until you need:
- Multi-node training for 70B+ models
- $5/hour per GPU (not $30/hour)
- 90%+ GPU utilization

Then you build your own ML infra. Here’s the reality: most ML engineers think training infrastructure means:
- Rent some A100s
- Install PyTorch
- Run the training script
- Scale with more GPUs

The pain starts around 8 GPUs. Remember: you’re not training ONE model on ONE GPU. You’re orchestrating DOZENS of experiments across hundreds of GPUs with checkpointing, fault tolerance, and resource sharing. That’s a scheduling problem, not a training problem.

What you actually need:
> A job scheduler that understands GPU topology
> A distributed checkpoint manager that doesn’t waste bandwidth
> A network fabric optimized for all-reduce
> Elastic training that handles node failures

This is the actual platform. Your training cost breakdown at scale:
> Compute: $10/GPU-hour (you pay $30 on cloud)
> Data transfer: $2/TB (kills you with large datasets)
> Storage: $0.02/GB-month (checkpoints add up fast)
> Network: included (but becomes the bottleneck)

The hidden cost? Idle GPU time while debugging.

The first principle of distributed training: bandwidth >> compute for models over 10B params. Ring all-reduce moves 2(N-1)/N of the data per GPU, so with 64 GPUs on 3.2 Tbps InfiniBand you max out around 200 GB/s of actual throughput. This is why “just add more GPUs” plateaus.

Training Llama 70B:
- 140GB model weights
- Optimizer states: 280GB
- Checkpoints every 1K steps
- 30 checkpoints = 12.6TB

One training run ≈ $250/month in storage (see the sketch below), and you run 50 experiments/month.

“We need to train 10 models simultaneously with different hyperparameters.” Now your platform needs:
> Gang scheduling for multi-GPU jobs
> Spot-instance preemption handling
> Shared dataset caching across jobs
> Priority queues with fairness

90% of DIY platforms can’t do this.

> Use cloud when you’re training <5 models/month, using standard frameworks, can tolerate random failures, and engineering time costs more than the GPU markup.
> Build your own when you train 20+ models/month, need 70B+ params, want <$10/GPU-hour, or are spending $50K+/month.

The actual math:
- AWS p5.48xlarge (8× H100): $98/hour. 100 training runs × 48 hours = $470,400/year.
- Your bare metal with 64× H100s at $2.5M upfront: depreciation + power = $150K/year; at 60% utilization = $312,500/year. Plus a $200K engineer and $50K maintenance. Break-even: ~18 months.

Production training platforms have four layers:
- Orchestration (job queue, gang scheduler, resource manager)
- Execution (distributed runtime, checkpoint manager, fault handler)
- Storage (dataset cache, checkpoint store, artifact registry)
- Telemetry (GPU utilization, training metrics, cost per epoch)

Most build layer 2 and skip the rest.

That’s it. Building training infrastructure is a 9-month project with upfront hardware costs. But at 100+ training runs/month? ROI in 12 months.

#ml #gpu #llm #infra #cloud #nvidia #inference #aws #ai
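The storage line item is easy to sanity-check. A minimal sketch taking the 140 GB / 280 GB / 30-checkpoint / $0.02-per-GB-month figures straight from the post (reading "140GB weights" as fp16 at 2 bytes per parameter is my assumption):

```python
# Checkpoint storage sanity check for the Llama-70B example.
# Figures come from the post; the fp16 bytes-per-parameter split is an assumption.

PARAMS = 70e9
weights_gb = PARAMS * 2 / 1e9                # ~140 GB of fp16 weights
optimizer_gb = 2 * weights_gb                # ~280 GB of optimizer states
checkpoint_gb = weights_gb + optimizer_gb    # ~420 GB per checkpoint

n_checkpoints = 30
storage_rate = 0.02                          # $/GB-month

total_tb = n_checkpoints * checkpoint_gb / 1000
cost_month = n_checkpoints * checkpoint_gb * storage_rate
print(f"{total_tb:.1f} TB retained -> ${cost_month:,.0f}/month per run")
# 12.6 TB -> ~$252/month; at 50 experiments/month that is ~$12.6K/month
```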

  • Ashish Patel 🇮🇳

    Sr Principal AI Architect at Oracle | Generative AI Expert & Strategist | xIBMer | Author: Hands-on Time Series Analytics with Python | IBM Quantum ML Certified | 15+ Yrs AI | IIMA | 100K+ Followers | 6x LinkedInTopVoice

    103,806 followers

Training a 405B LLM took 16,000 GPUs and 61 days—here’s the real math behind it.

Alright, every ML engineer has been there. You’re sitting in a meeting, and someone drops the classic: "So… how long will it take to train this model?" At first, I had no idea how to answer it, and when I tried finding answers, most articles threw a ton of jargon at me without the actual numbers I needed. Frustrating, right? So I dug into it myself and worked out a rough back-of-the-napkin calculation that actually works. Let’s break down the key stuff.

𝗧𝗵𝗲 𝗠𝗮𝘁𝗵: (https://lnkd.in/dWvgWvXM)
▸ It’s all about FLOPs (floating-point operations) and GPU throughput. You calculate how many FLOPs your model and data require, then divide by how many FLOPs per second your GPU setup can actually deliver.
▸ For example, the LLaMA 3.1 model has 405 billion parameters and was trained on 15.6 trillion tokens. In plain English: it needed about 3.8 × 10²⁵ FLOPs to train (yep, that's an insane number).
▸ To train this beast, they used 16,000 H100 GPUs. But here’s the catch—no GPU runs at its theoretical peak (~1,000 teraFLOPS in BF16 for an H100). Due to network and memory bottlenecks, these GPUs sustained roughly 38–40% of peak (their MFU), or about 400 teraFLOPS of useful throughput each.

So, how long did it take to train? Let’s calculate it:
3.8 × 10²⁵ FLOPs ÷ (16,000 × 4 × 10¹⁴ FLOP/s ≈ 6.4 × 10¹⁸ FLOP/s) ≈ 5.9 × 10⁶ seconds ≈ 68 days—right in the ballpark of the reported 61 days.

But what about the cost? 💸 This part always gets me. It took about 26 million GPU-hours to train LLaMA 3.1. At roughly $2 per GPU-hour, the total comes out to $52 million! That’s not a typo. I know, it’s wild.

𝗦𝗼, 𝗛𝗼𝘄 𝗗𝗼 𝗬𝗼𝘂 𝗘𝘀𝘁𝗶𝗺𝗮𝘁𝗲 𝗬𝗼𝘂𝗿 𝗢𝘄𝗻 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗧𝗶𝗺𝗲?
✦ Total FLOPs – Multiply your model size (parameters) by your dataset size (tokens); the formula’s in the images I shared.
✦ GPU FLOPs – Work out how many FLOPs per second your GPUs can actually sustain (and be honest about that efficiency drop!).
✦ Do the division – FLOPs needed ÷ effective FLOP/s = training time.

The cool part? Once you know how to calculate this, you stop guessing and start making solid predictions, avoiding those "ummm, not sure" moments with the team.

Lessons from LLaMA 3
⇉ If training a 405B-parameter model takes 16,000 GPUs and 61 days, scaling up or down from there gets pretty straightforward. But be warned: don’t just trust the theoretical max numbers for your hardware. Use the real-world throughput (MFU) you’re actually getting, or you’ll end up way off.
⇉ This method isn’t flawless, but it’s miles better than guessing. When you’re dropping millions on GPU hours, you definitely want more than a ballpark guess.

Would love to hear your thoughts if you've run similar calculations or hit similar roadblocks! Let’s get a conversation going. 👇

#LLMs #DataScience #ArtificialIntelligence
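Here is that back-of-the-napkin estimate as a few lines of Python, using the standard ~6 FLOPs-per-parameter-per-token approximation for dense transformers; the peak-TFLOPS and MFU defaults are assumptions to replace with your own measured numbers.

```python
# Back-of-the-napkin training-time estimator using the ~6*N*D FLOPs rule.
# peak_tflops and mfu are assumptions -- plug in your measured values.

def training_days(params, tokens, n_gpus, peak_tflops=989.0, mfu=0.40):
    total_flops = 6 * params * tokens                 # ~6 FLOPs/param/token
    effective = n_gpus * peak_tflops * 1e12 * mfu     # sustained FLOP/s
    return total_flops / effective / 86_400           # seconds -> days

# LLaMA 3.1 405B: 405e9 params, 15.6e12 tokens, 16,000 H100s
days = training_days(405e9, 15.6e12, 16_000)
gpu_hours = days * 24 * 16_000
print(f"~{days:.0f} days, ~{gpu_hours / 1e6:.0f}M GPU-hours, "
      f"~${2 * gpu_hours / 1e6:.0f}M at $2/GPU-hour")
# -> ~69 days, ~27M GPU-hours, ~$53M: the same order as the post's figures
```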

  • AVINASH CHANDRA (AAusIMM)

    Exploration Geologist at International Resources Holding Company (IRH), Abu Dhabi, UAE.

    8,951 followers

Understanding Operating Costs in Mining

Operating costs are fundamental to the economic viability of mining projects. A detailed understanding of these costs is essential for accurate financial modeling, ensuring profitability, and assessing project sustainability. Below is a concise breakdown of the key operating cost categories.

1. Mining Operating Costs
Mining operating costs cover the expenses incurred in extracting ore. They are driven by factors such as mining method (open-pit or underground), ore body characteristics, and scale of operation.
- Labor Costs: Wages for personnel involved in drilling, blasting, hauling, and supervision.
- Equipment & Maintenance: Costs of machinery (e.g., haul trucks, drills) and associated upkeep.
- Ore Extraction: The choice of mining method influences extraction costs, including drilling, blasting, and hauling.
- Material Handling: Transporting ore to the processing plant, often by haul truck or conveyor.

2. Processing Operating Costs
Once the ore is mined, it must be processed to extract the valuable metals. This phase includes technical processes such as crushing, grinding, flotation, and refining.
- Energy Consumption: Power for grinding, flotation, and other mechanical processes.
- Reagents & Consumables: Chemicals used for ore treatment and extraction.
- Labor & Equipment: Plant operators, technicians, and maintenance of processing machinery.
- Water Management: Water usage for processing and environmental treatment.

3. General & Administrative (G&A) Costs
G&A costs cover the non-operational expenses necessary for the overall administration of the mining project.
- Corporate Salaries: Compensation for management, finance, and administrative personnel.
- Office & Facility Expenses: Rent, utilities, and office supplies.
- Regulatory Compliance: Environmental and safety compliance, licenses, and audits.

4. Royalties and Social Commitments
Royalties and social commitments are payments made to governments, landowners, or local communities.
- Royalties: A percentage of revenue or mineral value paid to governments or landowners.
- Community Development: Investments in local infrastructure, environmental programs, and public welfare projects.

5. Other Operational Costs
Additional costs necessary for the efficient operation of the mine.
- Environmental Management: Waste disposal, land reclamation, and water treatment.
- Transportation & Logistics: Moving ore or concentrate to processing facilities or markets.
- Site Infrastructure: Development and maintenance of roads, power, and communication networks.

A comprehensive understanding of operating costs—including mining, processing, G&A, royalties, and other operational expenses—is crucial for accurate financial modeling in mining. Proper cost management enables efficient resource allocation and supports long-term project sustainability.

#MiningOperations #OperatingCosts #MiningFinance #Geology
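To see how these categories roll up in a financial model, here is a toy Python cost rollup per tonne of ore. Every unit cost below is an invented placeholder for illustration, not a benchmark from the post.

```python
# Toy operating-cost rollup per tonne of ore, mirroring the categories above.
# All unit costs are illustrative placeholders, not industry benchmarks.

opex_usd_per_tonne = {
    "mining": {"labor": 2.10, "equipment_maintenance": 1.60,
               "drill_and_blast": 1.20, "material_handling": 1.50},
    "processing": {"energy": 3.00, "reagents": 2.20,
                   "labor_and_equipment": 1.80, "water": 0.40},
    "g_and_a": {"salaries": 0.90, "facilities": 0.30, "compliance": 0.40},
    "royalties_social": {"royalty": 1.10, "community": 0.25},
    "other": {"environment": 0.60, "logistics": 1.40, "site_infrastructure": 0.50},
}

total = 0.0
for category, items in opex_usd_per_tonne.items():
    subtotal = sum(items.values())
    total += subtotal
    print(f"{category:>18}: ${subtotal:5.2f}/t")
print(f"{'TOTAL':>18}: ${total:5.2f}/t")
```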

  • Sudam Behera

    Head of Mines @ Rashmi Group

    20,035 followers

Thumb rules for mining feasibility studies:

Economic hurdles
- Minimum IRR: 15-20% for mining projects due to high risk
- NPV requirement: must be positive with a reasonable probability of success
- Payback period: target <5 years for capital recovery, <7 years maximum
- Discount rate: 8-12% for established mines, 12-20% for development projects
- Project sensitivity: a ±10% commodity price change moves NPV by roughly ±25-35%

Resources & reserves
- Minimum ore reserves: 8-15 years of mine life for economic viability
- Reserve confidence: at least 80% measured + indicated resources for feasibility
- Resource-to-reserve conversion: 70-85%
- Grade distribution: P50 grade should exceed cut-off grade by 20-30%
- Tonnage requirement: minimum 5-10 Mt (open cut), 2-5 Mt (underground)

Capital & cost estimation
- Accuracy level: scoping ±50%, pre-feasibility ±30%, feasibility ±15%
- Cost escalation: add 15-25% contingency for development projects
- Infrastructure costs: 20-40% of total CAPEX for remote locations
- Working capital: 10-20% of annual operating costs
- Sustaining CAPEX: 5-15% of annual revenue during operations

Operating cost benchmarks
- Mining costs: $3-12/tonne for open pit, $15-50/tonne for underground
- Processing costs: $8-30/tonne
- G&A costs: 8-15% of total operating costs
- Power costs: 15-25% of total operating costs for processing
- Labor costs: 25-40% of OPEX

Production
- Annual production: 10-15% of total ore reserves for optimal economics
- Plant utilization: design for 85-90% availability, 5,000+ hours/year

Prices, fiscal & risk
- Metal price assumptions: use a 3-5 year trailing average or consensus forecasts
- Revenue recognition: apply a 2-5% discount to spot prices for concentrate sales
- Tax rate: 25-35% corporate tax plus royalties (typically 2-8%)
- Closure costs: 3-8% of total CAPEX for reclamation
- Commodity price volatility: test ±30% price scenarios
- Permitting risk: add 1-3 years to the schedule

Market & logistics
- Market size: project production should be <5% of the global market
- Transportation costs: $20-100/tonne depending on distance and mode
- Treatment charges: $50-200/tonne for complex concentrates
- Marketing reach: identify 3+ potential buyers

Financing
- Debt capacity: maximum 50-70% debt-to-equity ratio for mining projects
- Debt service coverage: minimum 1.3-1.5× annual cash flow to debt payments
- Equity requirement: 30-50% equity contribution typical
- Cost of capital: 10-15% weighted average cost of capital (WACC)
- Development financing: expect 2-3% higher interest rates during construction

Study progression
- Scoping to PFS: increase resource confidence by 50%, reduce cost uncertainty to ±30%
- PFS to FS: complete metallurgical testing, finalize engineering design
- Study costs: scoping $0.5-2M, PFS $3-10M, FS $10-50M depending on project size
- Timeline: scoping 3-6 months, PFS 6-12 months, FS 12-24 months

Decision gates
- Scoping study: IRR >20%, conceptual economics positive
- Pre-feasibility: IRR >15%, NPV >$100M, technical risks identified
- Feasibility: IRR >12%, detailed engineering complete, financing secured

#thumbrules
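The headline hurdles are easy to screen in code. A minimal Python sketch with invented cash flows, purely to show the mechanics of testing a project against the IRR, NPV, and payback rules above:

```python
# Screening a hypothetical project against the headline thumb rules.
# The $400M CAPEX / $85M-per-year cash flows are invented for illustration.

def npv(rate, cashflows):                 # cashflows[0] is the year-0 outflow
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=1e-6, hi=1.0, tol=1e-7):
    """Bisection on NPV(rate) = 0; fine for one sign change in the cash flows."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, cashflows) > 0 else (lo, mid)
    return (lo + hi) / 2

cfs = [-400e6] + [85e6] * 12              # 12-year mine life
payback = next(t for t in range(1, 13) if sum(cfs[:t + 1]) >= 0)
print(f"IRR {irr(cfs):.1%} (hurdle 15-20%), "
      f"NPV@10% ${npv(0.10, cfs) / 1e6:.0f}M, "
      f"payback {payback} yr (target <5)")
```

For this invented project the screen returns an IRR of about 18%, a positive NPV at a 10% discount rate, and a 5-year payback: marginal on payback, acceptable on the other two.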

  • Dr. Mohamed Tash

    Decarbonization & Energy Strategy Executive | Helping Industrial Giants Reach Net-Zero via AI-Driven Sustainability | Doctorate in Environmental Science | Top 1% Voice in Energy.

    23,882 followers

Ever wonder how companies actually decide whether an energy-saving project is worth the money? The gold standard is the Internal Rate of Return (IRR) – the single most powerful metric in engineering economics.

In simple terms: the IRR tells you the annual return a project generates over its lifetime. If the IRR is higher than your company’s Minimum Attractive Rate of Return (MARR – basically your cost of capital or hurdle rate), the project makes financial sense. The bigger the gap, the better the investment.

Other key concepts in play:
- Net Present Value (NPV) = 0 at the IRR
- Uniform Annual Series (A) – the same savings every year
- (P/A, i%, n) factor – converts annual amounts to today’s dollars
- MARR – the minimum return your company will accept (often 8–15% depending on risk)

Now, let’s see this in action with a manufacturing example:

Project: Energy Conservation Measure (ECM)
- Initial cost: $100,000
- Annual energy savings: $23,400
- Life: 12 years
- Company MARR: 12%

Using the classic (P/A) factor method:
Required factor = 100,000 ÷ 23,400 = 4.2735
From the interest tables, this falls between 21% and 22%. After interpolation (or Excel’s IRR function) → IRR ≈ 21.0%.

That’s roughly 9 percentage points above MARR – an absolute no-brainer.

Bottom line: this project doesn’t just pay back… it delivers outstanding returns with a huge safety margin. Check the infographic below for the full step-by-step calculation – perfect if you’re preparing for the PE or CEM exams.

#EngineeringEconomy #IRR #EnergyEfficiency #CapitalProjects #Sustainability #Manufacturing #PEexam #EnergyManagement
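Here is the same factor method done numerically, as a minimal Python sketch; a bisection search stands in for the interest-table lookup and interpolation:

```python
# IRR via the (P/A, i%, n) factor: find i where P/A(i, n) equals
# initial_cost / annual_savings. Bisection replaces the interest tables.

def pa_factor(i, n):
    """Present worth of a uniform annual series of 1 for n years at rate i."""
    return (1 - (1 + i) ** -n) / i

def irr_uniform(initial_cost, annual_savings, n, lo=1e-6, hi=1.0):
    target = initial_cost / annual_savings        # 100,000 / 23,400 = 4.2735
    while hi - lo > 1e-7:
        mid = (lo + hi) / 2
        if pa_factor(mid, n) > target:            # P/A falls as i rises
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

i = irr_uniform(100_000, 23_400, 12)
print(f"IRR = {i:.1%}")    # ~21.0%, about 9 points above a 12% MARR
```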

  • Marcos de Paiva Bueno

    Founder & CEO | PhD in Mineral Processing | Process Optimization | Strategic Leadership

    7,819 followers

How do you achieve full comminution characterization of an entire ore body for under €6.5 per metre? By strategically testing 1% of your drilling assay samples and leveraging AI for the remaining 99% – it can entirely change the economics.

You’re already spending approximately €550 per metre just to drill and assay core samples: €500 for drilling and €50 for assays. Yet the most critical data for mill design, mine scheduling, and NPV forecasting usually remains incomplete: comminution characterization.

Currently, the mining industry still relies on expensive, infrequent comminution tests performed on large composite samples, typically representing less than 0.1% of total drilled core. This means multi-million-euro feasibility studies rest on hardness assumptions that scarcely capture the orebody’s real variability.

At Geopyörä, we've developed a solution that changes this completely. For only €6.45 per metre, exploration and mining projects can achieve optimized CAPEX and improved NPV forecasts, significantly reducing risk through greater accuracy.

How is it possible?
- Select just 1% of assay samples as reference samples, strategically chosen to capture geological variability.
- Conduct the Geopyörä Breakage Test on these reference samples to obtain critical comminution parameters.
- Reconcile these reference samples (no contamination) and submit them for standard geochemical assays.
- Perform geochemical and mineralogical assays, or core scanning, on 100% of the samples.
- Use AI-driven geomet modeling to accurately infer comminution parameters from the geochemical and mineralogical data for the remaining 99% of samples.

This approach generates comprehensive, high-volume comminution data to populate your block model, significantly improving resource definition. Your decisions on mill selection, mine scheduling, process optimization, and financial forecasting can now be fully data-driven.

Let's put the economics clearly into perspective:
- Drilling: €500 per metre (90%)
- Assays: €50 per metre (9%)
- Comminution data (Geopyörä testing + AI inference): €6.45 per metre (1%)

This additional 1% investment changes everything, because while assay data tells you what's in the ore, comminution data tells you precisely how the ore will perform in your plant. Geopyörä makes this insight accessible, scalable, and standard. We’ve partnered with reputable labs like ALS and SGS, and our methods have been validated against industry-standard tests such as SMC and JKDWT.

So when we say €6.45 per metre, we’re offering more than just data – we're providing a comprehensive decision-making model:
One that connects geology directly to processing.
One that protects the significant investment you've already made.
One that allows you to design, schedule, and blend confidently, without drilling a single unnecessary metre.

You’ve already spent 99% of your budget collecting the data. This last 1% tells you exactly how to leverage it.
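Mechanically, "test 1%, infer 99%" is a supervised-regression workflow. Here is a toy sketch with scikit-learn on synthetic data; the features, model choice, and numbers are placeholders, not Geopyörä's actual geomet model:

```python
# Sketch of "physically test 1%, infer the rest": fit a regressor on the
# reference samples' measured hardness, predict all samples from assay features.
# Synthetic data and a generic model -- NOT Geopyörä's proprietary approach.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 10_000                                  # assayed intervals, one per metre
X = rng.normal(size=(n, 4))                 # stand-ins for geochem features
hardness = 10 + X @ np.array([2.0, -1.5, 0.8, 0.3]) + rng.normal(scale=0.5, size=n)

ref = rng.choice(n, size=n // 100, replace=False)   # the 1% physically tested
model = GradientBoostingRegressor().fit(X[ref], hardness[ref])

rest = np.setdiff1d(np.arange(n), ref)      # the 99% inferred by the model
print(f"R^2 on the inferred 99%: {model.score(X[rest], hardness[rest]):.2f}")
```

The value of the real workflow lies in how well geochemistry and mineralogy actually predict breakage behaviour for a given deposit, which is exactly what the reference samples let you validate.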

  • Mikolaj Budzanowski

    Low Cost Energy Solutions for Industry | GreenIndustry | Green Technology | Green Energy

    4,772 followers

Can artificial intelligence reduce the cost of heating electrification and fossil fuel consumption in the building sector?

The Y Combinator-backed startup Kapacity.io is revolutionizing the decarbonization of buildings, using AI-generated efficiency savings to propel the shift to clean energy in commercial real estate. Their mission is clear: move away from fossil fuels and optimize heating and cooling systems using machine learning.

Key highlights from Kapacity.io:

🏢 The startup offers building owners and occupiers an incentive to transition to clean energy by integrating AI into buildings' HVAC systems and electricity meters. Real-time adjustments to heating and cooling systems not only cut energy use and CO2 emissions but also generate revenue for building owners.

🔗 Kapacity.io uses demand response, which lets electricity consumers get paid for adjusting their consumption in line with utility demand. This supports the electrification of buildings by making the investment more lucrative while shifting consumption away from fossil fuels. “For example if there is a lot of wind power production and suddenly the wind drops or the weather changes and the utility company is running power grids they need to balance that reduction — and the way to do that is either you can fire up natural gas turbines or you can reduce power consumption… Our product estimates how much can we reduce electricity consumption at any given minute. We are [targeting] heating and cooling devices because they consume a lot of electricity”, say the startup’s founders.

🌬️ With a spotlight on heating and cooling devices, Kapacity.io addresses a significant share of electricity consumption. Its AI algorithms make dynamic adjustments without compromising thermal comfort, contributing to efficiency services such as peak shaving.

🌍 While currently focused on larger buildings, including multifamily and commercial structures, Kapacity.io eyes expansion into residential buildings. The startup sees potential in global markets, with early targets being the European Union, the U.K., and the U.S.

📊 In early pilots, Kapacity.io claims a 25% reduction in electricity costs and a 10% reduction in CO2 emissions. The startup's approach aligns with the worldwide effort to tackle climate change and underscores the importance of shifting towards renewable energy sources.

🚀 Kapacity.io expects to play a role even in a scenario where all buildings run on 100% renewable power: its control software could continue to generate energy cost savings, contributing to a faster transition to a renewable energy system.

The application of AI in building optimization, as demonstrated by Kapacity.io, shows how technology can drive sustainable practices and accelerate the global shift towards greener energy solutions.

Read the full story: https://t.ly/q4ohx
Mikolaj Budzanowski Boryszew Green Energy & Gas

#ai #sustainability #cleanenergy #buildingoptimization #renewablefuture

  • Robert Speht, MBA

    Energy Strategy & Development Leader | Offshore & Floating Wind | Investment & Market Entry | Public–Private Capital | UK–EU–International

    37,038 followers

What’s gone wrong for offshore wind in the UK? A detailed and thoughtful article setting out the challenges, problems and limits facing the UK's offshore wind sector.

A major factor is rising costs. The dramatic and seemingly unstoppable fall in offshore wind prices created its own momentum: as prices continued to make a mockery of forecasts, it came to be assumed they would fall indefinitely. Firms bid in at low prices to secure contracts and establish themselves in the market. It turns out there may have been limits. The picture remains murky, but a few things seem to have caused costs to spike, in what energy consultancy Wood Mackenzie called a perfect storm wiping out wind profits:

🌬️ A possible fall-off in learning rates (the price falls that come from firms learning how to do something better).
🌬️ A rise in labour costs hitting all industries.
🌬️ A rise in materials costs, which Carbon Brief estimates at double-digit increases for most metallic components.
🌬️ A rise in energy prices hitting all industries.
🌬️ Growing pains: Siemens Energy has reported turbine faults forcing write-downs and a 37% fall in the firm’s value. With roughly 800 GW targeted around the world, production needs to scale up at impossibly rapid rates, and this is costly.
🌬️ Rising interest rates. We do not need to pay for wind as fuel, so pretty much the entire cost of a wind farm is a large initial capital outlay. Firms normally borrow to cover it, expecting to recoup the cost once the project starts generating. This meant the industry was hit hard by the roughly ten-fold rise in borrowing costs.

Read the full article: https://lnkd.in/e3NJxpmb
