Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning, and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output.

Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains.

You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

Here's code intended for task X: [previously generated code]
Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it.

Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements.

This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing code, writing text, and answering questions. And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases, or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement.

Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses.

Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications' results. If you're interested in learning more about Reflection, I recommend:
- Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
- Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
- CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)

[Original text: https://lnkd.in/g4bTuWtU ]
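To make the generate/critique/rewrite loop concrete, here is a minimal sketch in Python. It is not from the original post: `call_llm` is a placeholder for whatever chat-completion client you use, and `reflect_and_rewrite` is a hypothetical helper name.

```python
# Minimal sketch of the Reflection loop described above.
# `call_llm` is a placeholder: wire it to your LLM provider of choice
# (OpenAI, Anthropic, Gemini, a local model, ...).

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Connect this to your LLM client.")

def reflect_and_rewrite(task: str, rounds: int = 2) -> str:
    """Generate code for `task`, then alternate critique and rewrite."""
    code = call_llm(f"Write code to accomplish the following task:\n{task}")
    for _ in range(rounds):
        # Ask the model to criticize its own previous output.
        critique = call_llm(
            "Here's code intended for the task below. Check it carefully for "
            "correctness, style, and efficiency, and give constructive "
            f"criticism for how to improve it.\n\nTask: {task}\n\nCode:\n{code}"
        )
        # Feed the code plus the critique back in and ask for a rewrite.
        code = call_llm(
            "Rewrite the code below, applying the reviewer feedback.\n\n"
            f"Task: {task}\n\nCode:\n{code}\n\nFeedback:\n{critique}"
        )
    return code
```

A two-agent variant, as described in the post, can be approximated by giving the critique call and the rewrite call different system prompts (one "author", one "reviewer").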
Project Management
Explore top LinkedIn content from expert professionals.
-
If I were starting a new PROJECT today and wanted to plan it with ZERO prior knowledge, I'd do this:

Step 1: Define Your Objective
• Clearly articulate what success looks like for the project.
• Break down the high-level goal into smaller, manageable milestones.
• Ensure the objective aligns with stakeholders' expectations to avoid misalignment later.

Step 2: Build Your Plan Backwards and Leverage Historical Data
Most people skip this step entirely. But this is a huge mistake, because you risk creating a plan that doesn't align with deadlines, resources, or realistic expectations. Here's how:
• Start from the final deliverable and work backward to define the timeline.
• Gather and review historical data or similar project examples to understand typical timelines and challenges.
• Identify key dependencies and create a logical sequence for tasks.
• Use project planning tools (like Gantt charts or Kanban boards) to visualize your plan.
• Clearly define roles and responsibilities for each stage.
Pro tip: Don't forget to account for buffer time; projects rarely go 100% as planned.

Step 3: Identify Risks and Create a Mitigation Plan
This isn't easy. But if you can do this, you will get:
• Clarity on potential roadblocks before they derail progress.
• Stakeholder confidence in your ability to deliver.
• A proactive, problem-solving mindset that boosts your credibility.
Here's a quick way to do this: list out possible risks, evaluate their impact and likelihood, and create a plan to minimize or respond to them (see the small scoring sketch below). Collaborate with your team to spot any blind spots. Don't skip this step.

It took me months of trial and error (and some chaos) to crystallize these steps. Hope this helps!
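As a concrete companion to Step 3, here is a tiny sketch of a risk register that ranks risks by impact times likelihood, so mitigation effort goes to the biggest exposures first. The risks, scales, and mitigations shown are made-up examples, not from the post.

```python
# Tiny risk-register sketch: score = impact (1-5) x likelihood (1-5).
# The sample risks and ratings below are illustrative only.

risks = [
    {"risk": "Key supplier delivers late", "impact": 4, "likelihood": 3,
     "mitigation": "Order long-lead items early; line up a backup supplier"},
    {"risk": "Scope creep from stakeholders", "impact": 3, "likelihood": 4,
     "mitigation": "Formal change-request process with sign-off"},
    {"risk": "Critical team member unavailable", "impact": 5, "likelihood": 2,
     "mitigation": "Cross-train a second person; document key decisions"},
]

# Rank by score so the highest-exposure risks get mitigation plans first.
for r in sorted(risks, key=lambda r: r["impact"] * r["likelihood"], reverse=True):
    score = r["impact"] * r["likelihood"]
    print(f"[{score:>2}] {r['risk']} -> {r['mitigation']}")
```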
-
As a junior lawyer, I had to learn how to make it easy for supervisors to review my work. In case it helps, here's a step-by-step guide (with an example):

1️⃣ Make it clear what the matter / document is and when input is needed.

2️⃣ Set out the context and approach to preparing the deliverable
What needs to be reviewed, how was it prepared, and what's the timeline? If you're attaching a document, include the live link to your file management platform (e.g. iManage or SharePoint) as well as a static version.

3️⃣ Set out the next steps and your ask
Make it clear what your supervisor needs to review. Set this out at the top of your email and proactively provide some recommendations. You can also follow up in person to make sure deadlines aren't missed.

4️⃣ Explain how the draft is marked up
Make it easy to navigate with specific questions (either in the document or extracted in the email). If there are mark-ups against a particular document / version, identify what that is.

5️⃣ Summarise your inputs
Let them know what your draft reflects, and attach the relevant inputs so they can see everything in one place. This will give your supervisor confidence that you've captured everything, and make it easier for them to check your work.

6️⃣ Flag key aspects / assumptions
If there are key assumptions / principles that have a big impact on how your draft is prepared, it's helpful to set them out in the email as a point of focus. Try to also set out the relevant clause / section / reference where possible.

Is there anything else that you'd add? What else have you found helpful in making drafts easier to review, either as a junior lawyer or a supervisor?

------

Btw, if you're a junior lawyer looking for practical career advice - check out the free how-to guides on my website. You can also stay updated by sending a connection / follow.

#legalprofession #lawyers #lawstudents #lawfirms
-
Today, PMI releases the first results from the largest study we've ever conducted - on a topic that is critical to our profession: Project Success.

Read the report: https://lnkd.in/ekRmSj_h

With this report, we are introducing a simple and scalable way to measure project success. A successful project is one that delivers value worth the effort and expense, as perceived by key stakeholders. This clearly represents a shift for our profession, where beyond execution excellence we also feel accountable for doing anything in our power to improve the impact of our work and the value it generates at large.

The implications for project professionals can be summarized in a framework for delivering MORE success:

M - Manage Perceptions
For a project to be considered successful, the key stakeholders - customers, executives, or others - must perceive that the project's outcomes provide sufficient value relative to the perceived investment of resources.

O - Own Project Success beyond Project Management Success
Project professionals need to take any opportunity to move beyond literal mandates and feel accountable for improving outcomes while minimizing waste.

R - Relentlessly Reassess Project Parameters
Project professionals need to recognize the reality of inevitable and ongoing change, and continuously, in collaboration with stakeholders, reassess the perception of value and adjust plans.

E - Expand Perspective
All projects have impacts beyond just the scope of the project itself. Even if we do not control all parameters, we must consider the broader picture and how the project fits within the larger business, goals, or objectives of the enterprise, and ultimately, our world.

I believe executives will be excited about this work. It highlights the value project professionals can bring to their organizations and clarifies the vital role they play in driving transformation, delivering business results, and positively impacting the world. The shift in mindset will encourage project professionals to consider the perceptions of all stakeholders - not just the C-suite, but also customers and communities.

To deliver more successful projects, business leaders must create environments that empower project professionals. They need to involve them in defining - and continuously reassessing and challenging - project value. Leverage their expertise. Invest in their work. And hold them accountable for contributing to maximize the perception of project value at all phases of the project - beyond excellence in execution.

Please read the report, reflect on its findings, and share it broadly. And comment!

Project Management Institute #ProjectSuccess #PMI #Leadership #ProjectManagementToday
-
When working with multiple LLM providers, managing prompts, and handling complex data flows, structure isn't a luxury, it's a necessity.

A well-organized architecture enables:
✅ Collaboration between ML engineers and developers
✅ Rapid experimentation with reproducibility
✅ Consistent error handling, rate limiting, and logging
✅ Clear separation of configuration (YAML) and logic (code)

Key Components That Drive Success
It's not just about folder layout, it's how components interact and scale together:
✅ Centralized configuration using YAML files
✅ A dedicated prompt engineering module with templates and few-shot examples
✅ Properly sandboxed model clients with standardized interfaces
✅ Utilities for caching, observability, and structured logging
✅ Modular handlers for managing API calls and workflows

This setup can save teams countless hours in debugging, onboarding, and scaling real-world GenAI systems, whether you're building RAG pipelines, fine-tuning models, or developing agent-based architectures. A small sketch of the configuration/logic split follows below.

What's your go-to project structure when working with LLMs or Generative AI systems? Let's share ideas and learn from each other.
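As one possible illustration of the configuration/logic split mentioned above, here is a small sketch. The YAML keys, the model name, and the `ModelClient` interface are hypothetical choices for the example, and it assumes PyYAML is installed.

```python
# Sketch of separating YAML configuration from code. The config keys,
# prompt template, and ModelClient interface below are illustrative only.
from dataclasses import dataclass

import yaml  # PyYAML; assumed to be installed

CONFIG_YAML = """
model:
  provider: openai        # which client implementation to construct
  name: gpt-4o-mini       # hypothetical model name
  temperature: 0.2
prompts:
  summarize: |
    Summarize the following text in three bullet points:
    {text}
"""

@dataclass
class ModelConfig:
    provider: str
    name: str
    temperature: float

class ModelClient:
    """Standardized interface that each provider-specific client implements."""
    def __init__(self, config: ModelConfig):
        self.config = config

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("Subclass per provider (OpenAI, Anthropic, ...).")

def load_config(raw: str) -> tuple[ModelConfig, dict]:
    """Parse YAML into a typed model config plus a dict of prompt templates."""
    cfg = yaml.safe_load(raw)
    return ModelConfig(**cfg["model"]), cfg["prompts"]

model_cfg, prompts = load_config(CONFIG_YAML)
prompt = prompts["summarize"].format(text="...your document here...")
```

The point of the split is that experiments (new prompts, model swaps, temperature changes) live in version-controlled YAML, while error handling, caching, and logging stay in one place behind the client interface.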
-
Should you try Google's famous "20% time" experiment to encourage innovation? We tried this at Duolingo years ago. It didn't work. It wasn't enough time for people to start meaningful projects, and very few people took advantage of it because the framework was pretty vague. I knew there had to be other ways to drive innovation at the company. So, here are 3 other initiatives we've tried, what we've learned from each, and what we're going to try next.

💡 Innovation Awards: Annual recognition for those who move the needle with boundary-pushing projects.
The upside: These awards make our commitment to innovation clear, and offer a well-deserved incentive to those who have done remarkable work.
The downside: It's given to individuals, but we want to incentivize teamwork. What's more, it's not necessarily a framework for coming up with the next big thing.

💻 Hackathon: This is a good framework, and lots of companies do it. Everyone (not just engineers) can take two days to collaborate on and present anything that excites them, as long as it advances our mission or addresses a key business need.
The upside: Some of our biggest features grew out of hackathon projects, from the Duolingo English Test (born at our first hackathon in 2013) to our avatar builder.
The downside: Other than the time/resource constraint, projects rarely align with our current priorities. The ones that take off hit the elusive combo of right time + a problem that no other team could tackle.

🔥 Special Projects: Knowing that ideal equation, we started a new program for fostering innovation, playfully dubbed DARPA (Duolingo Advanced Research Project Agency). The idea: anyone can pitch an idea at any time. If they get consensus on it and if it's not in the purview of another team, a cross-functional group is formed to bring the project to fruition. The most creative work tends to happen when a problem is not in the clear purview of a particular team; this program creates a path for bringing these kinds of interdisciplinary ideas to life. Our Duo and Lily mascot suits (featured often on our social accounts) came from this, as did our Duo plushie and the merch store. (And if this photo doesn't show why we needed to innovate for new suits, I don't know what will!)
The biggest challenge: figuring out how to transition ownership of a successful project after the strike team's work is done.

What's next? We're working on a program that proactively identifies big-picture, unassigned problems that we haven't figured out yet and then incentivizes people to create proposals for solving them. How that will work is still to be determined, but we know there is a lot of fertile ground for it to take root.

How does your company create an environment of creativity that encourages true innovation? I'm interested to hear what's worked for you, so please feel free to share in the comments!

#duolingo #innovation #hackathon #creativity #bigideas
-
It's easy as a PM to only focus on the upside. But you'll notice: more experienced PMs actually spend more time on the downside. The reason is simple: the more time you've spent in Product Management, the more times you've been burned. The team releases "the" feature that was supposed to change everything for the product - and everything remains the same. When you reach this stage, product management becomes less about figuring out what new feature could deliver great value, and more about de-risking the choices you have made to deliver the needed impact.

--

To do this systematically, I recommend considering Marty Cagan's classic 4 Risks.

1. Value Risk: The Soul of the Product
Remember Juicero? They built a $400 Wi-Fi-enabled juicer, only to discover that their value proposition wasn't compelling. Customers could just as easily squeeze the juice packs with their hands. A hard lesson in value risk. Value Risk asks whether customers care enough to open their wallets or devote their time. It's the soul of your product. If you can't match how much they value their money or time, you're toast.

2. Usability Risk: The User's Lens
Usability Risk isn't about whether customers find value; it's about whether they can even get to that value. Can they navigate your product without wanting to throw their device out the window? Google Glass failed not because of value but usability. People didn't want to wear something perceived as geeky, or that invaded privacy. Google Glass was a usability nightmare that never got its day in the sun.

3. Feasibility Risk: The Art of the Possible
Feasibility Risk takes a different angle. It's not about the market or the user; it's about you. Can you and your team actually build what you've dreamed up? Theranos promised the moon but couldn't deliver. It claimed its technology could run extensive tests with a single drop of blood. The reality? It was scientifically impossible with their tech. They ignored feasibility risk and paid the price.

4. Viability Risk: The Multi-Dimensional Chess Game
(Business) Viability Risk is the "grandmaster" of risks. It asks: does this product make sense within the broader context of your business? Take Kodak, for example. They actually invented the digital camera but failed to adapt their business model to this disruptive technology. They held back due to fear it would cannibalize their film business.

--

This systematic approach is the best way I have found to help de-risk big launches. How do you like to de-risk?
-
🧠 How To Manage Challenging Stakeholders and Influence Without Authority (free eBook, 95 pages) (https://lnkd.in/e6RY6dQB), a practical guide on how to deal with difficult stakeholders, manage difficult situations, and stay true to your product strategy. From HiPPOs (Highest Paid Person's Opinion) to ZEbRAs (Zero Evidence But Really Arrogant). By Dean Peters.

Key takeaways:
✅ Study your stakeholders as you study your users.
✅ Attach your decisions to a goal, metric, or a problem.
✅ Have research data ready to challenge assumptions.
✅ Explain your tradeoffs, decisions, customer insights, data.
🚫 Don't hide your designs: show unfinished work early.
✅ Explain the stage of your work and the feedback you need.
✅ For one-off requests, paint and explain the full picture.
✅ Create a space for small experiments to limit damage.
✅ Build trust for your process with regular key updates.
🚫 Don't invite feedback on design, but on your progress.

As designers, we often sit on our work, waiting for the perfect moment to show the grand final outcome. Yet one of the most helpful strategies I've found is to give full, uncensored transparency about the work we are doing. The decision making, the frameworks we use to make these decisions, how we test, how we gather insights and make sense of them.

Every couple of weeks I would either write down or record a short 3-4 min video for stakeholders. I explain the progress we've made over the weeks, how we've made decisions, and what our next steps will be. I show the design work done and abandoned, informed by research, refined by designers, reviewed by engineers, fine-tuned by marketing, approved by other colleagues. I explain the current stage of the design and what kind of feedback we would love to receive.

I don't really invite early feedback on the visual appearance or flows, but I actively invite agreement on the general direction of the project from stakeholders. I ask if there is anything that is quite important for them, but that we might have overlooked in the process.

It's much more difficult to argue against real data and a real established process that has led to positive outcomes over the years. In fact, stakeholders rarely know how we work. They rarely know the implications and costs of last-minute changes. They rarely see the intricate dependencies of "minor adjustments" late in the process.

Explain how your work ties in with their goals. Focus on the problem you are trying to solve and the value it delivers for them, not the solution you are suggesting. Support your stakeholders, and you might be surprised how quickly you might get the support that you need.

Useful resources:
The Delicate Art of Interviewing Stakeholders, by Dan Brown: https://lnkd.in/dW5Wb8CK
Good Questions For Stakeholders, by Lisa Nguyen, Cori Widen: https://lnkd.in/eNtM5bUU
UX Research to Win Over Stubborn Stakeholders, by Lizzy Burnam: https://lnkd.in/eW3Yyg5k

[continues below]

#ux #design
-
Agile is just Waterfall in disguise. And it's killing innovation.

The Agile Manifesto aimed to free development from processes and rigidity:
⦿ Individuals and interactions over processes and tools.
⦿ Working software over comprehensive documentation.
⦿ Responding to change over following a plan.
⦿ Customer collaboration over contract negotiation.

But today, Agile has become what it tried to fix. Why?

Hierarchy > Autonomy
Managers resisting self-organizing teams to preserve their position.

Predictability > Experimentation
Executives request predictable outcomes for shareholders.

Certification > Mindset
Certifications and frameworks don't mean competence.

The hidden truth is: companies apply a coat of Agile paint while preserving traditional hierarchies and dynamics. They perform Agile ceremonies while abandoning the core principles:
🚫 Stand-ups become interrogations
🚫 Jira boards replace conversations
🚫 MVPs require 50-page specs
🚫 Collaboration means assigning tasks, not solving problems together
🚫 Product Owners filter user feedback that conflicts with their roadmaps
🚫 Teams plan entire backlogs upfront and label it "Agile"
🚫 Focus is not on value delivery but on sprint completion

The consequences? Agile lets hierarchies hide, consultants cash in, and teams chase sprint completions. Same same but different name. And innovation dies with a Jira ticket.

Here is a question for you: is your company performing Agile or being agile?
-
Don't make these common mistakes in techno-economic assessments (and avoid misleading conclusions).

TEA is a powerful tool to assess the feasibility of emerging technologies. But even small mistakes can lead to misleading conclusions and poor decisions. Here are 5 key mistakes I've seen repeatedly, and how to fix them:

1. Overestimating Technology Performance
Problem: Assuming ideal or lab-scale performance when scaling up. Real-world conditions often bring inefficiencies.
Fix: Use conservative assumptions, validate with experimental data, and conduct sensitivity analysis.

2. Ignoring Uncertainty
Problem: Treating input values (e.g., costs, energy efficiency) as fixed leads to rigid, unreliable results.
Fix: Perform sensitivity and scenario analyses to identify critical variables and explore best/worst cases.

3. Using Outdated or Poor-Quality Data
Problem: Relying on old data or inconsistent sources reduces the credibility of your TEA.
Fix: Source data from updated literature, validated models, or credible industry benchmarks, and clearly document assumptions. If data is missing for new technologies, use proxy technologies and check uncertainties.

4. Oversimplifying Economic Analysis
Problem: Focusing only on capital costs (CAPEX) while ignoring operating costs (OPEX), maintenance, or financing impacts. Or focusing on a single metric, like NPV.
Fix: Include all cost components (CAPEX, OPEX, and life-cycle costs) and calculate key metrics like NPV, IRR, and payback period. (A small illustrative calculation follows below.)

5. Neglecting Policy and Market Factors
Problem: Ignoring factors like carbon pricing, subsidies, or fluctuating raw material costs can skew results.
Fix: Integrate policy scenarios, market trends, and potential incentives to build a more realistic TEA.

Techno-economic analysis is only as good as its assumptions and methods. Avoiding these mistakes will help you deliver insights that are credible, actionable, and valuable for decision-making.

We're going to discuss all these challenges with TEA and more during my workshop in Q1 2025. What challenges have you faced when conducting TEA? I'd love to hear your thoughts in the comments!

#Research #ChemicalEngineering #Economics #Energy #PhD #Scientist #Professor
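To illustrate points 2 and 4, here is a small sketch that computes NPV and a simple payback period, then sweeps one input to show a basic one-variable sensitivity analysis. Every number in it is a placeholder for illustration, not a benchmark or real project data.

```python
# Toy TEA sketch: NPV, simple payback, and a one-variable sensitivity sweep.
# All figures below are placeholders, not real cost benchmarks.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value; cash_flows[0] is year 0 (typically negative CAPEX)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def simple_payback(cash_flows: list[float]) -> int | None:
    """First year in which undiscounted cumulative cash flow turns non-negative."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None  # never pays back within the horizon

capex = -10_000_000               # year-0 investment (placeholder)
base_annual_net = 1_800_000       # annual revenue minus OPEX (placeholder)
flows = [capex] + [base_annual_net] * 10

print(f"Base NPV @ 8%: {npv(0.08, flows):,.0f}")
print(f"Simple payback: year {simple_payback(flows)}")

# Sensitivity: vary annual net cash flow +/- 30% to see how fragile NPV is.
for factor in (0.7, 1.0, 1.3):
    scenario = [capex] + [base_annual_net * factor] * 10
    print(f"Net cash flow x{factor:.1f}: NPV = {npv(0.08, scenario):,.0f}")
```

Reporting the result as a range across such scenarios, rather than a single NPV, is one way to address both the uncertainty and the single-metric pitfalls above.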