Machine Learning Algorithms

Explore top LinkedIn content from expert professionals.

  • View profile for CA Rahul

    Tax Head at Lenskart | Ex-OYO, Bytedance (TikTok), EY

    13,239 followers

    Tax Managers: If AI could review your tax reconciliations overnight, would you still call it a tool?

    We’re entering an era where AI doesn’t just assist - it thinks, learns, and even suggests. For tax managers juggling tight deadlines, dynamic tax regulations, and endless documentation, what if AI:
    - could flag mismatches before you did?
    - draft position notes in seconds?
    - interpret the latest case law and suggest action points?

    Is this automation? Or is this your new virtual tax analyst? The lines are blurring fast. AI is no longer limited to mundane tasks - it’s starting to contribute to decision-making prep. It feels less like a tool and more like a smart colleague.

    But here’s the twist: AI still lacks judgment, intuition, and professional skepticism. Human review, strategic thinking, and ethical reasoning remain irreplaceable.

    So, what’s the future? A hybrid model, where AI is your intelligent assistant and you remain the decision-maker. In the hands of a skilled tax professional, AI isn’t just a tool - it’s a force multiplier.

    So, is AI a tool in your belt or a team member on your bench? I’d love to hear how you’re using AI in your tax function - and whether it’s a silent assistant or something much more.

    #TaxManager #AIinTax #TaxTech #FutureOfTax #FinanceLeaders #TaxStrategy #TaxAutomation #ArtificialIntelligence #ChatGPTinFinance #DigitalTax #CFOInsights #TechCuriosity #DecisionSupport

  • View profile for Aishwarya Srinivasan
    Aishwarya Srinivasan is an Influencer
    605,408 followers

    If you’re an AI engineer trying to understand how reasoning actually works inside LLMs, this will help you connect the dots.

    Most large language models can generate. But reasoning models can decide. Traditional LLMs followed a straight line: Input → Predict → Output. No self-checking, no branching, no exploration. Reasoning models introduced structure - a way for models to explore multiple paths, score their own reasoning, and refine their answers.

    We started with Chain-of-Thought (CoT) reasoning, then extended to Tree-of-Thought (ToT) for branching, and now to graph-based reasoning, where models connect, merge, or revisit partial thoughts before concluding. This evolution changes how LLMs solve problems. Instead of guessing the next token, they learn to search the reasoning space - exploring alternatives, evaluating confidence, and adapting dynamically.

    Different reasoning topologies serve different goals (a minimal search sketch follows this post):
    • Chains for simple sequential reasoning
    • Trees for exploring multiple hypotheses
    • Graphs for revising and merging partial solutions

    Modern architectures (like OpenAI’s o-series reasoning models, Anthropic’s Claude reasoning stack, DeepSeek’s R series, and DeepMind’s AlphaReasoning experiments) use this idea under the hood. They don’t just generate answers - they navigate reasoning trajectories, using adaptive depth-first or breadth-first exploration depending on task uncertainty.

    Why does this matter?
    • It reduces hallucinations by verifying intermediate steps
    • It improves interpretability, since we can visualize reasoning paths
    • It boosts reliability for complex tasks like planning, coding, or tool orchestration

    The next phase of LLM development won’t be about more parameters - it’ll be about better reasoning architectures: topologies that can branch, score, and self-correct.

    I’ll be doing a deep dive on reasoning models soon on my Substack, exploring architectures, training approaches, and practical applications for engineers. If you haven’t subscribed yet, make sure you do: https://lnkd.in/dpBNr6Jg

    ♻️ Share this with your network
    🔔 Follow along for more data science & AI insights
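
    For intuition, here is a minimal beam-search sketch of the tree-of-thought idea in Python. The propose and score callables are hypothetical stand-ins for LLM calls (nothing here is a specific vendor API); a graph topology would additionally merge or revisit states instead of only branching.

        # Minimal tree-of-thought beam search (illustrative sketch).
        # propose(state) -> candidate next thoughts (hypothetical LLM call)
        # score(state)   -> confidence in [0, 1]     (hypothetical LLM call)
        from typing import Callable, List, Tuple

        def tree_of_thought_search(
            root: str,
            propose: Callable[[str], List[str]],
            score: Callable[[str], float],
            beam_width: int = 3,
            depth: int = 4,
        ) -> str:
            """Breadth-first beam search over partial reasoning states."""
            frontier: List[Tuple[float, str]] = [(score(root), root)]
            for _ in range(depth):
                candidates: List[Tuple[float, str]] = []
                for _, state in frontier:
                    for thought in propose(state):
                        new_state = state + "\n" + thought
                        candidates.append((score(new_state), new_state))
                if not candidates:  # nothing left to expand
                    break
                # Keep only the highest-scoring branches; prune the rest.
                candidates.sort(key=lambda c: c[0], reverse=True)
                frontier = candidates[:beam_width]
            return max(frontier, key=lambda c: c[0])[1]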

  • View profile for Sahar Mor

    I help researchers and builders make sense of AI | ex-Stripe | aitidbits.ai | Angel Investor

    41,122 followers

    Researchers from Oxford University just achieved a 14% performance boost in mathematical reasoning by making LLMs work together like specialists in a company.

    In their new MALT (Multi-Agent LLM Training) paper, they introduced a novel approach where three specialized LLMs - a generator, a verifier, and a refinement model - collaborate to solve complex problems, similar to how a programmer, tester, and supervisor work together.

    The breakthrough lies in their training method (a tiny inference-loop sketch follows this post):
    (1) Tree-based exploration - generating thousands of reasoning trajectories by having models interact
    (2) Credit attribution - identifying which model is responsible for successes or failures
    (3) Specialized training - using both correct and incorrect examples to train each model for its specific role

    Using this approach on 8B-parameter models, MALT achieved relative improvements of 14% on the MATH dataset, 9% on CommonsenseQA, and 7% on GSM8K. This represents a significant step toward more efficient and capable AI systems, showing that well-coordinated smaller models can match the performance of much larger ones.

    Paper: https://lnkd.in/g6ag9rP4

    — Join thousands of world-class researchers and engineers from Google, Stanford, OpenAI, and Meta staying ahead on AI: http://aitidbits.ai
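
    As a rough sketch of how the three roles interact at inference time (my own assumptions; the paper’s contribution is how these models are trained, not this loop):

        # Illustrative generator -> verifier -> refiner loop in the spirit of MALT.
        # generate/verify/refine are hypothetical stand-ins for the three models.
        def solve(problem: str, generate, verify, refine, max_rounds: int = 3) -> str:
            answer = generate(problem)
            for _ in range(max_rounds):
                critique, ok = verify(problem, answer)  # (feedback text, passed?)
                if ok:
                    break  # verifier accepts the answer
                answer = refine(problem, answer, critique)
            return answer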

  • View profile for Kuldeep Singh Sidhu

    Senior Data Scientist @ Walmart | BITS Pilani

    14,381 followers

    Exciting breakthrough in Retrieval Augmented Generation (RAG): researchers have developed GFM-RAG, the first Graph Foundation Model for enhancing LLM knowledge retrieval.

    >> Key Innovations
    Novel architecture: GFM-RAG introduces a query-dependent Graph Neural Network that can process complex knowledge relationships in a single step, dramatically improving both efficiency and accuracy compared to traditional multi-step approaches.

    Under the Hood
    - Constructs a knowledge graph index from documents to capture relationships between information
    - Uses a 6-layer query-dependent message-passing neural network with 512-dimensional hidden states
    - Implements DistMult message functions and sum aggregation for graph processing (see the sketch after this post)
    - Pre-trains on 60 knowledge graphs containing over 14M triples and 700k documents

    >> Performance Highlights
    The system achieves state-of-the-art results across multiple datasets, outperforming existing methods by significant margins:
    - Up to 19.8% improvement in retrieval accuracy
    - 10x faster processing compared to multi-step approaches
    - Demonstrates strong zero-shot generalization across 7 different domains

    >> Impact
    This breakthrough by researchers from Monash University, Nanjing University of Science and Technology, and Griffith University represents a significant step forward in making LLMs more knowledgeable and efficient. The system’s ability to scale and transfer across domains makes it particularly valuable for real-world applications.
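
    For readers curious about the DistMult message function mentioned above, here is a minimal NumPy sketch of elementwise-product messages with sum aggregation. The shapes and names are my assumptions for illustration, not the authors’ code:

        # DistMult-style message passing with sum aggregation (illustrative).
        import numpy as np

        def distmult_messages(node_emb, rel_emb, edges):
            """edges: list of (src, rel, dst) index triples.
            The message along an edge is the elementwise product of the
            source-node and relation embeddings; incoming messages are
            summed per destination node."""
            out = np.zeros_like(node_emb)
            for src, rel, dst in edges:
                out[dst] += node_emb[src] * rel_emb[rel]
            return out

        # Usage: node_emb of shape (num_nodes, 512) and rel_emb of shape
        # (num_relations, 512) would match the 512-dimensional hidden
        # states mentioned in the post.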

  • View profile for Sarthak Rastogi

    AI engineer | Posts on agents + advanced RAG | Experienced in LLM research, ML engineering, Software Engineering

    23,109 followers

    Microsoft just released a new paper that gives LLMs the skill to organise their own thinking.

    The paper is "Era of Agentic Organization" and the core idea is this: as LLMs get better at reasoning, our current techniques don’t scale. Sequential thinking is too slow. Parallel thinking is wasteful. Neither approach lets the model adapt mid-reasoning.

    So Microsoft Research built AsyncThink, a new reasoning paradigm where one model plays two roles at once:
    - an Organizer that decides how the reasoning should be structured
    - multiple Workers that solve sub-problems in parallel

    The interesting part: the model learns via RL how to manage its own thinking. It figures out:
    - when to fork into parallel sub-tasks
    - which branches are worth exploring
    - when to merge everything back together

    Impact:
    - 28% lower latency than parallel thinking
    - higher accuracy on math tasks
    - and striking generalization: models trained only on simple countdown games suddenly solve Sudoku and complex math through learned asynchronous reasoning

    As companies lean into AI agents across workflows, this becomes a big deal. Systems need to organize their reasoning like project managers: splitting work, coordinating processes, and adjusting on the fly.

    How AsyncThink works (simple breakdown; a toy fork-join sketch follows this post):
    1. The model receives a problem.
    2. The Organizer decides whether to fork it into sub-problems.
    3. Workers solve each sub-problem concurrently.
    4. The Organizer monitors progress and decides which branches to continue or drop.
    5. Partial results get merged through a Join step.
    6. Final reasoning is produced from the coordinated structure.

    It’s basically a first step toward "self-organizing agents" that can scale across an entire organization’s workflows.

    Understand the paper in 5 mins with this visual breakdown: https://lnkd.in/gJnwYMi7

    If you want more breakdowns like this, I share tutorials on building and improving AI apps + agents in my newsletter 𝑨𝑰 𝑨𝒈𝒆𝒏𝒕 𝑬𝒏𝒈𝒊𝒏𝒆𝒆𝒓𝒊𝒏𝒈: https://lnkd.in/gaJTcZBR

    ♻️ Share it with anyone who wants to understand how next-gen agentic reasoning actually works :)

    Diagrams made with Miskies AI.

    #AI #AIAgents
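
    As a toy illustration of the fork-join pattern (not the paper’s implementation - the real Organizer learns these fork/join decisions via RL, and the workers are model calls, not sleeps):

        # Minimal asyncio sketch of the Organizer/Worker fork-join pattern.
        import asyncio

        async def worker(sub_problem: str) -> str:
            # Stand-in for a model call solving one sub-problem.
            await asyncio.sleep(0.1)
            return f"result({sub_problem})"

        async def organizer(problem: str) -> str:
            # Fork: split the problem into sub-problems (fixed split here;
            # AsyncThink learns when and how to fork).
            subs = [f"{problem}/part{i}" for i in range(3)]
            # Workers run concurrently.
            results = await asyncio.gather(*(worker(s) for s in subs))
            # Join: merge partial results into a final answer.
            return " + ".join(results)

        print(asyncio.run(organizer("task")))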

  • View profile for Borys Ulanenko

    Helping transfer pricing advisors deliver 80% faster, high-precision benchmarks | Founder of ArmsLength AI

    18,404 followers

    Transfer pricing benchmarking is changing dramatically. I’ve watched this transformation firsthand, and the difference is striking.

    The Old Way:
    1. TP analyst extracts a list of companies from the database
    2. Manager provides vague criteria ("marketing services providers")
    3. Analyst reviews companies with unclear guidance on edge cases
    4. Review takes 2 full days with poorly documented rejection reasons
    5. Manager spends 4 hours reviewing and finds issues
    6. Analyst rushes through another review on Friday
    7. Team sends to client, hoping it’s acceptable (until tax authorities challenge it)
    Total: 4 days with questionable results

    Now, with AI integration:
    - AI performs the entire review with documented sources for each comparable
    - Detailed review criteria are clearly defined upfront
    - Criteria are applied consistently across all potential comparables
    - Changing strategy or criteria takes 30 minutes for hundreds of companies, not days
    - Sources are automatically screenshotted for reference
    Total: half a day with superior documentation

    HMRC’s guidance emphasizes the need for "detailed steps showing how comparables were reviewed, accepted, and rejected, including clear explanations for judgments and positions taken." Too many benchmarks fail the simple question: "Why did you reject Company X but accept Company Y?"

    Remember our discussion about benchmarking documentation? Tax authorities immediately spot inconsistencies in comparable selection. They can either remove your accepted comparables that break your own rules or add back rejected comparables that match your accepted ones. Both can drastically change your results.

    For transfer pricing advisors, this transformation means:
    → Less time spent on manual reviews
    → Better defensibility during tax audits
    → Consistent application of selection criteria
    → Clear audit trail with documented sources
    → Ability to quickly adjust strategies as needed

    The benchmark economics might be broken (as we’ve discussed before), but AI helps rebalance the equation by dramatically reducing the time investment while improving quality.

  • View profile for Srishtik Dutta

    SWE-2 @Google | Ex - Microsoft, Wells Fargo | ACM ICPC ’20 Regionalist | 6🌟 at Codechef | Candidate Master at Codeforces | Guardian (Top 1%) on LeetCode | Technical Content Writer ✍️| 100K+ on LinkedIn

    126,456 followers

    🚀 Contest Winning Strategies 101 - From Moore’s Voting to Majority-in-Range Queries 🚀

    Most FAANG interviews will test you on the classic Majority Element problem: find the element that appears > n/2 times in a static array. Moore’s Voting Algorithm nails it in O(n) time and O(1) space:
    1. Candidate selection (one pass)
    2. Verification (second pass)

    Every standard sheet in the market handles that - so let me ask you the next question: "What’s the majority in A[L…R]?" - and you have up to 10⁵ queries on an array of size 10⁵. A brute force of O(n) per query is a non-starter. Here’s how to level up:

    ⚙️ 1. Store a "voting pair" per segment
    Build a Segment Tree where each node over a subarray [l..r] keeps:
    - cand = the Moore candidate for that subarray
    - cnt = its "net vote" (votes for minus votes against)
    Merging two children is just another round of voting:

        if left.cand == right.cand:
            merged.cand = left.cand
            merged.cnt = left.cnt + right.cnt
        elif left.cnt > right.cnt:
            merged.cand = left.cand
            merged.cnt = left.cnt - right.cnt
        else:
            merged.cand = right.cand
            merged.cnt = right.cnt - left.cnt

    That gives you O(log N) per query to retrieve a single candidate for [L..R].

    🔍 2. Verify with prefix lists
    Moore’s trick only selects a candidate; you still need to confirm it really occurs > ⌊(R−L+1)/2⌋ times. Precompute pos[val] = sorted list of all indices where A[i] == val. Then in O(log N) you can count occurrences in [L..R] with two binary searches.

    📈 3. Complexity
    - Build: O(N log N) (segment tree + position map)
    - Each query: O(log N) to get cand + O(log N) to verify → O(log N) total
    This scales to 10⁵ queries in under a second, turning a one-pass offline trick into a real-time range service. (A runnable sketch follows this post.)

    💡 Key takeaway: many "linear-time" interview algorithms can be lifted to range-query or dynamic settings by augmenting data structures (segment trees, BITs, etc.) with just a bit of extra info. Taking your understanding of standard problems to the next level is how you level up your DSA game. Stay tuned for more such content!! ✌🏻🚀
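
    Here is a compact, runnable Python sketch of the whole approach - the voting-pair segment tree plus the binary-search verification. Names and structure are mine:

        # Range-majority queries: Boyer-Moore voting pairs in a segment tree,
        # verified against per-value sorted index lists (illustrative sketch).
        from bisect import bisect_left, bisect_right
        from collections import defaultdict

        class RangeMajority:
            def __init__(self, a):
                self.n = len(a)
                self.a = a
                self.tree = [(0, 0)] * (4 * self.n)  # (candidate, net votes)
                self.pos = defaultdict(list)         # value -> sorted indices
                for i, v in enumerate(a):
                    self.pos[v].append(i)
                self._build(1, 0, self.n - 1)

            def _merge(self, left, right):
                (lc, lv), (rc, rv) = left, right
                if lc == rc:
                    return (lc, lv + rv)
                if lv > rv:
                    return (lc, lv - rv)
                return (rc, rv - lv)

            def _build(self, node, l, r):
                if l == r:
                    self.tree[node] = (self.a[l], 1)
                    return
                m = (l + r) // 2
                self._build(2 * node, l, m)
                self._build(2 * node + 1, m + 1, r)
                self.tree[node] = self._merge(self.tree[2 * node],
                                              self.tree[2 * node + 1])

            def _query(self, node, l, r, ql, qr):
                if qr < l or r < ql:
                    return (0, 0)  # neutral element for the voting merge
                if ql <= l and r <= qr:
                    return self.tree[node]
                m = (l + r) // 2
                return self._merge(self._query(2 * node, l, m, ql, qr),
                                   self._query(2 * node + 1, m + 1, r, ql, qr))

            def majority(self, ql, qr):
                """Return the majority element of a[ql..qr], or None."""
                cand, _ = self._query(1, 0, self.n - 1, ql, qr)
                idx = self.pos[cand]
                count = bisect_right(idx, qr) - bisect_left(idx, ql)
                return cand if count > (qr - ql + 1) // 2 else None

        # Usage:
        rm = RangeMajority([1, 2, 1, 1, 3, 1, 2, 2])
        print(rm.majority(0, 5))  # 1 (appears 4 times in a window of 6)
        print(rm.majority(4, 7))  # None (no element appears > 2 times in 4)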

  • View profile for Marc Theermann

    Chief Strategy Officer at Boston Dynamics (Building the world's most capable mobile #robots and Embodied AI)

    59,035 followers

    Another robotics masterpiece from our friends at Disney Research!

    Recent progress in physics-based character control has improved learning from unstructured motion data, but it’s still hard to create a single control policy that handles diverse, unseen motions and works on real robots. To solve this, the team at Disney proposes a new two-stage technique.

    In the first stage, an autoencoder learns a latent-space encoding from short motion clips. In the second stage, this encoding helps train a policy that maps kinematic input to dynamic output, ensuring accurate and adaptable movements. By keeping these stages separate, the method benefits from better motion encoding and avoids common issues like mode collapse. (A minimal sketch of this structure follows the post.)

    The technique has proven effective in simulation and has successfully brought dynamic motions to a real bipedal robot, marking an important step forward in robot control.

    You can find the full paper here: https://lnkd.in/d-kzexdJ

    What Markus Gross, Moritz Baecher and the rest of the gang are bringing to life is unbelievable!
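
    To make the two-stage structure concrete, here is a minimal PyTorch sketch of how the pieces could fit together. All dimensions and module shapes are assumptions for illustration, not the paper’s architecture:

        # Stage 1: motion autoencoder; Stage 2: policy conditioned on the
        # frozen latent encoding (illustrative sketch, assumed dimensions).
        import torch
        import torch.nn as nn

        class MotionAutoencoder(nn.Module):
            def __init__(self, clip_dim=60, latent_dim=16):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(clip_dim, 128), nn.ReLU(),
                                             nn.Linear(128, latent_dim))
                self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                             nn.Linear(128, clip_dim))

            def forward(self, clip):
                z = self.encoder(clip)          # latent code for the motion clip
                return self.decoder(z), z       # reconstruction + latent

        class LatentConditionedPolicy(nn.Module):
            """Maps robot state + motion latent to joint targets (stage 2)."""
            def __init__(self, state_dim=30, latent_dim=16, action_dim=12):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(state_dim + latent_dim, 256),
                                         nn.ReLU(),
                                         nn.Linear(256, action_dim))

            def forward(self, state, z):
                return self.net(torch.cat([state, z], dim=-1))

        # Stage 1: train MotionAutoencoder on short clips (reconstruction loss).
        # Stage 2: freeze the encoder, encode each reference clip, and train
        # the policy against the resulting latents (e.g. with RL in simulation).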

  • View profile for Aleksandra Bal

    Indirect Tax Technology | BSc Computer Science, PhD Tax Policy & MBA | Sales Tax & VAT Solutions Across 100+ Countries | Building Global Tax Infrastructure @ Stripe

    9,451 followers

    🚨 New Report Release: 𝐎𝐄𝐂𝐃'𝐬 2025 𝐓𝐚𝐱 𝐀𝐝𝐦𝐢𝐧𝐢𝐬𝐭𝐫𝐚𝐭𝐢𝐨𝐧 𝐃𝐢𝐠𝐢𝐭𝐚𝐥𝐢𝐬𝐚𝐭𝐢𝐨𝐧 𝐚𝐧𝐝 𝐃𝐢𝐠𝐢𝐭𝐚𝐥 𝐓𝐫𝐚𝐧𝐬𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧 𝐈𝐧𝐢𝐭𝐢𝐚𝐭𝐢𝐯𝐞𝐬 🚨

    Today the OECD has published the latest edition of its Tax Administration Digitalisation and Digital Transformation Initiatives - and the results underscore just how quickly the digital tax landscape is evolving globally. Key takeaways from the 54 OECD FTA member administrations:

    ✅ 𝐃𝐚𝐭𝐚 𝐃𝐢𝐫𝐞𝐜𝐭 𝐟𝐫𝐨𝐦 𝐁𝐮𝐬𝐢𝐧𝐞𝐬𝐬 𝐒𝐲𝐬𝐭𝐞𝐦𝐬: Around 80% of tax administrations now receive data directly from taxpayers’ business systems — and in some cases, entirely machine-to-machine, without any human intervention.

    🧾 𝐏𝐫𝐞𝐟𝐢𝐥𝐥𝐞𝐝 𝐕𝐀𝐓 𝐑𝐞𝐭𝐮𝐫𝐧𝐬: Thanks to technologies like electronic invoicing systems, nearly 40% of administrations can now prefill VAT returns, simplifying processes for businesses and improving accuracy.

    🤖 𝐀𝐈 𝐢𝐧 𝐀𝐜𝐭𝐢𝐨𝐧: Over 70% of tax administrations are using AI — especially in fraud detection, risk assessment, and through virtual assistants to improve taxpayer services and boost compliance.

    📘 𝐌𝐚𝐜𝐡𝐢𝐧𝐞-𝐑𝐞𝐚𝐝𝐚𝐛𝐥𝐞 𝐋𝐚𝐰: Nearly 30% of administrations now publish all tax legislation in machine-readable formats, making it easier for systems (and developers) to integrate tax logic directly into business applications.

    🔐 𝐓𝐫𝐮𝐬𝐭𝐞𝐝 𝐓𝐞𝐜𝐡 & 𝐓𝐫𝐚𝐧𝐬𝐩𝐚𝐫𝐞𝐧𝐜𝐲: A third of administrations are publishing lists of approved software products, increasing trust in the tools used for tax compliance and enabling a more secure, transparent ecosystem.

    This report is a clear indicator: tax compliance is becoming more embedded, automated, and intelligent.

    📥 Read the full report here: https://lnkd.in/euc4BJSV

  • View profile for Aishwarya Naresh Reganti

    Founder & CEO @ LevelUp Labs | Ex-AWS | Consulting, Training & Investing in AI

    117,400 followers

    🤔 Vanilla-RAG struggles with structured knowledge sources like knowledge graphs (KGs). GNN-RAG is a very neat idea to fix this!

    ⛳ Vanilla-RAG struggles with structured inputs like KGs because it relies heavily on LLMs for retrieval, and LLMs are not adept at handling the complex graph information inherent in KGs. This leads to suboptimal performance, especially on multi-hop and multi-entity questions that require traversing multiple relationships in the graph.

    ⛳ GNN-RAG integrates the strengths of both LLMs and Graph Neural Networks (GNNs) to solve this issue:
    💡 GNN: excels at processing and reasoning over graph structures. It reasons over a dense KG subgraph to retrieve answer candidates for a given question.
    💡 LLM: leverages its natural language processing abilities to reason further over the information provided by the GNN.

    👉 Here’s the workflow (a small sketch follows this post):
    🔺 A GNN processes the KG to identify and retrieve candidate answers.
    🔺 The shortest paths connecting question entities to answer candidates in the KG are extracted to represent reasoning paths.
    🔺 These paths are verbalized and provided as input to the LLM for final reasoning and answer generation.

    GNN-RAG achieves state-of-the-art results on two widely used KGQA benchmarks, WebQSP and ComplexWebQuestions (CWQ), and outperforms existing methods, including GPT-4, particularly on multi-hop and multi-entity questions.

    Link to the paper: https://lnkd.in/euC7N85K
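
    For intuition, here is a small sketch of the path-extraction and verbalization steps using networkx on a toy KG. The GNN candidate-retrieval step is stubbed out, and the example triples are made up:

        # Shortest-path extraction + verbalization for the LLM (illustrative).
        import networkx as nx

        # Toy KG: directed edges labeled with relations.
        kg = nx.DiGraph()
        kg.add_edge("Jamaica", "Usain_Bolt", relation="country_of_citizenship_of")
        kg.add_edge("Usain_Bolt", "sprinting", relation="sport")

        def verbalize_paths(graph, question_entity, answer_candidates):
            """Turn shortest KG paths into text the LLM can reason over."""
            lines = []
            for answer in answer_candidates:  # e.g. produced by the GNN
                path = nx.shortest_path(graph, question_entity, answer)
                hops = [f"{u} --{graph[u][v]['relation']}--> {v}"
                        for u, v in zip(path, path[1:])]
                lines.append(" ; ".join(hops))
            return "\n".join(lines)

        print(verbalize_paths(kg, "Jamaica", ["sprinting"]))
        # Jamaica --country_of_citizenship_of--> Usain_Bolt ; Usain_Bolt --sport--> sprinting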
