Tax Managers: If AI could review your tax reconciliations overnight, would you still call it a tool? We're entering an era where AI doesn't just assist - it thinks, learns, and even suggests. For tax managers juggling tight deadlines, dynamic tax regulations, and endless documentation, what if AI:
- could flag mismatches before you did?
- draft position notes in seconds?
- interpret the latest case law and suggest action points?
Is this automation? Or is this your new virtual tax analyst? The lines are blurring fast. AI is no longer limited to mundane tasks - it's starting to contribute to decision-making prep. It feels less like a tool and more like a smart colleague. But here's the twist: AI still lacks judgment, intuition, and professional skepticism. Human review, strategic thinking, and ethical reasoning remain irreplaceable. So, what's the future? A hybrid model, where AI is your intelligent assistant and you remain the decision-maker. In the hands of a skilled tax professional, AI isn't just a tool - it's a force multiplier. So, is AI a tool in your belt or a team member on your bench? I'd love to hear how you're using AI in your tax function - and whether it's a silent assistant or something much more. #TaxManager #AIinTax #TaxTech #FutureOfTax #FinanceLeaders #TaxStrategy #TaxAutomation #ArtificialIntelligence #ChatGPTinFinance #DigitalTax #CFOInsights #TechCuriosity #DecisionSupport
Machine Learning Algorithms
Explore top LinkedIn content from expert professionals.
-
If you're an AI engineer trying to understand how reasoning actually works inside LLMs, this will help you connect the dots. Most large language models can generate. But reasoning models can decide. Traditional LLMs followed a straight line: Input → Predict → Output. No self-checking, no branching, no exploration. Reasoning models introduced structure: a way for models to explore multiple paths, score their own reasoning, and refine their answers. We started with Chain-of-Thought (CoT) reasoning, then extended to Tree-of-Thought (ToT) for branching, and now to graph-based reasoning, where models connect, merge, or revisit partial thoughts before concluding. This evolution changes how LLMs solve problems. Instead of guessing the next token, they learn to search the reasoning space - exploring alternatives, evaluating confidence, and adapting dynamically. Different reasoning topologies serve different goals:
• Chains for simple sequential reasoning
• Trees for exploring multiple hypotheses
• Graphs for revising and merging partial solutions
Modern architectures (like OpenAI's o-series reasoning models, Anthropic's Claude reasoning stack, DeepSeek's R series, and DeepMind's AlphaReasoning experiments) use this idea under the hood. They don't just generate answers; they navigate reasoning trajectories, using adaptive depth-first or breadth-first exploration depending on task uncertainty. Why this matters:
• It reduces hallucinations by verifying intermediate steps
• It improves interpretability, since we can visualize reasoning paths
• It boosts reliability for complex tasks like planning, coding, or tool orchestration
The next phase of LLM development won't be about more parameters; it'll be about better reasoning architectures: topologies that can branch, score, and self-correct. I'll be doing a deep dive on reasoning models soon on my Substack, exploring architectures, training approaches, and practical applications for engineers. If you haven't subscribed yet, make sure you do: https://lnkd.in/dpBNr6Jg ♻️ Share this with your network. Follow along for more data science & AI insights.
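To make the tree-of-thought idea concrete, here is a minimal best-first search sketch in Python. It is a toy illustration, not any vendor's implementation: propose_thoughts and score are hypothetical stand-ins for the LLM calls that would generate and self-evaluate partial reasoning paths.

    import heapq

    def propose_thoughts(path):
        # Stand-in for an LLM proposing candidate next reasoning steps.
        return [path + [f"step{len(path)}.{i}"] for i in range(2)]

    def score(path):
        # Stand-in for an LLM self-evaluating a partial reasoning path
        # (higher is better); this toy version just prefers deeper paths.
        return len(path)

    def tree_of_thought(max_depth=3, max_expansions=20):
        # Best-first search over the tree of partial reasoning paths.
        frontier = [(-score([]), [])]
        while frontier and max_expansions > 0:
            max_expansions -= 1
            _, path = heapq.heappop(frontier)
            if len(path) >= max_depth:
                return path                       # accept a finished path
            for child in propose_thoughts(path):
                heapq.heappush(frontier, (-score(child), child))
        return []

    print(tree_of_thought())

Swapping the scoring function or the pop strategy is what turns this skeleton into depth-first, breadth-first, or confidence-weighted exploration - the "topologies" the post describes.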
-
Researchers from Oxford University just achieved a 14% performance boost in mathematical reasoning by making LLMs work together like specialists in a company. In their new MALT (Multi-Agent LLM Training) paper, they introduced a novel approach where three specialized LLMs - a generator, a verifier, and a refinement model - collaborate to solve complex problems, similar to how a programmer, tester, and supervisor work together. The breakthrough lies in their training method:
(1) Tree-based exploration - generating thousands of reasoning trajectories by having the models interact
(2) Credit attribution - identifying which model is responsible for successes or failures
(3) Specialized training - using both correct and incorrect examples to train each model for its specific role
Using this approach on 8B-parameter models, MALT achieved relative improvements of 14% on the MATH dataset, 9% on CommonsenseQA, and 7% on GSM8K. This represents a significant step toward more efficient and capable AI systems, showing that well-coordinated smaller models can match the performance of much larger ones. Paper: https://lnkd.in/g6ag9rP4
Join thousands of world-class researchers and engineers from Google, Stanford, OpenAI, and Meta staying ahead on AI: http://aitidbits.ai
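For intuition, here is a hedged Python sketch of the generate → verify → refine inference loop the paper describes. The three call_* functions are hypothetical stand-ins for three separately fine-tuned models; MALT's actual training machinery (tree search and credit attribution) is not reproduced here.

    def call_generator(question):
        # Stand-in for the generator model producing a first attempt.
        return f"draft answer to: {question}"

    def call_verifier(question, draft):
        # Stand-in for the verifier model critiquing the draft.
        return "possible error: re-check step 2"

    def call_refiner(question, draft, critique):
        # Stand-in for the refinement model incorporating the critique.
        return f"revised answer to: {question} (addressed: {critique})"

    def malt_inference(question):
        draft = call_generator(question)
        critique = call_verifier(question, draft)
        return call_refiner(question, draft, critique)

    print(malt_inference("If 3x + 2 = 11, what is x?"))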
-
Exciting breakthrough in Retrieval-Augmented Generation (RAG): researchers have developed GFM-RAG, the first graph foundation model for enhancing LLM knowledge retrieval.
>> Key Innovations
Novel architecture: GFM-RAG introduces a query-dependent Graph Neural Network that can process complex knowledge relationships in a single step, dramatically improving both efficiency and accuracy compared to traditional multi-step approaches.
Under the hood:
- Constructs a knowledge-graph index from documents to capture relationships between information
- Uses a 6-layer query-dependent message-passing neural network with 512-dimensional hidden states
- Implements DistMult message functions and sum aggregation for graph processing
- Pre-trains on 60 knowledge graphs containing over 14M triples and 700k documents
>> Performance Highlights
The system achieves state-of-the-art results across multiple datasets, outperforming existing methods by significant margins:
- Up to 19.8% improvement in retrieval accuracy
- 10x faster processing compared to multi-step approaches
- Strong zero-shot generalization across 7 different domains
>> Impact
This breakthrough by researchers from Monash University, Nanjing University of Science and Technology, and Griffith University represents a significant step forward in making LLMs more knowledgeable and efficient. The system's ability to scale and transfer across domains makes it particularly valuable for real-world applications.
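To illustrate what "DistMult messages with sum aggregation" means, here is a toy NumPy sketch of one message-passing layer. The graph, dimensions, and the way the query embedding enters the message are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    rng = np.random.default_rng(0)
    dim, n_nodes = 512, 5
    node_h = rng.normal(size=(n_nodes, dim))            # node hidden states
    rel_emb = {"cites": rng.normal(size=dim)}           # relation embeddings
    edges = [(0, "cites", 1), (2, "cites", 1), (3, "cites", 4)]  # (head, rel, tail)
    query = rng.normal(size=dim)                        # query embedding

    def layer(node_h):
        out = np.zeros_like(node_h)
        for h, r, t in edges:
            # DistMult-style message: element-wise product of the source
            # state and the relation embedding, modulated by the query
            # (my assumption for the "query-dependent" part).
            out[t] += node_h[h] * rel_emb[r] * query
        return out  # sum aggregation; a real model adds nonlinearity/norm

    node_h = layer(node_h)
    print(node_h.shape)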
-
Microsoft just released a new paper that gives LLMs the skill to organise their own thinking. The paper is "Era of Agentic Organization", and the core idea is this: as LLMs get better at reasoning, our current techniques don't scale. Sequential thinking is too slow. Parallel thinking is wasteful. Neither approach lets the model adapt mid-reasoning. So Microsoft Research built AsyncThink, a new reasoning paradigm where one model plays two roles at once:
- an Organizer that decides how the reasoning should be structured
- multiple Workers that solve sub-problems in parallel
The interesting part: the model learns via RL how to manage its own thinking. It figures out:
- when to fork into parallel sub-tasks
- which branches are worth exploring
- when to merge everything back together
Impact:
- 28% lower latency than parallel thinking
- Higher accuracy on math tasks
- Striking generalization: models trained only on simple countdown games suddenly solve Sudoku and complex math through learned asynchronous reasoning
As companies lean into AI agents across workflows, this becomes a big deal. Systems need to organize their reasoning like project managers: splitting work, coordinating processes, and adjusting on the fly. How AsyncThink works (simple breakdown):
1. The model receives a problem.
2. The Organizer decides if it should fork into sub-problems.
3. Workers solve each sub-problem concurrently.
4. The Organizer monitors progress and decides which branches to continue or drop.
5. Partial results get merged through a Join step.
6. The final reasoning is produced from the coordinated structure.
It's basically the first step toward "self-organizing agents" that can scale across an entire organization's workflows. Understand the paper in 5 minutes with this visual breakdown: https://lnkd.in/gJnwYMi7 If you want more breakdowns like this, I share tutorials on building and improving AI apps + agents in my newsletter: https://lnkd.in/gaJTcZBR ♻️ Share it with anyone who wants to understand how next-gen agentic reasoning actually works :) Diagrams made with Miskies AI. #AI #AIAgents
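Here is a minimal fork/join sketch in Python that mirrors the Organizer/Worker loop above. It is a structural analogy only: solve_subproblem is a stand-in for a worker model call, and in real AsyncThink the organizer is itself a trained model that learns via RL when to fork and join, not hand-written code.

    import asyncio

    async def solve_subproblem(subtask: str) -> str:
        # Stand-in for a worker LLM call solving one sub-problem.
        await asyncio.sleep(0.1)            # simulate model latency
        return f"{subtask}: partial result"

    async def organizer(problem: str) -> str:
        # Fork: the organizer splits the problem into sub-tasks.
        subtasks = [f"{problem} / part {i}" for i in range(3)]
        workers = [asyncio.create_task(solve_subproblem(s)) for s in subtasks]
        # Join: merge partial results into one final answer.
        partials = await asyncio.gather(*workers)
        return " | ".join(partials)

    print(asyncio.run(organizer("count ways to reach 24")))

The latency win comes from exactly this structure: the three workers run concurrently, so wall-clock time is roughly one worker call instead of three.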
-
Transfer pricing benchmarking is changing dramatically. I've watched this transformation firsthand, and the difference is striking.
The old way:
1. TP analyst extracts a list of companies from the database
2. Manager provides vague criteria ("marketing services providers")
3. Analyst reviews companies with unclear guidance on edge cases
4. Review takes 2 full days with poorly documented rejection reasons
5. Manager spends 4 hours reviewing and finds issues
6. Analyst rushes through another review on Friday
7. Team sends to client, hoping it's acceptable (until tax authorities challenge it)
Total: 4 days, with questionable results.
Now, with AI integration:
- AI performs the entire review with documented sources for each comparable
- Detailed review criteria are clearly defined upfront
- Criteria are applied consistently across all potential comparables
- Changing strategy or criteria takes 30 minutes for hundreds of companies, not days
- Sources are automatically screenshotted for reference
Total: half a day, with superior documentation.
HMRC's guidance emphasizes the need for "detailed steps showing how comparables were reviewed, accepted, and rejected, including clear explanations for judgments and positions taken." Too many benchmarks fail the simple question: "Why did you reject Company X but accept Company Y?" Remember our discussion about benchmarking documentation? Tax authorities immediately spot inconsistencies in comparable selection. They can either remove your accepted comparables that break your own rules or add back rejected comparables that match your accepted ones. Both can drastically change your results. For transfer pricing advisors, this transformation means:
✅ Less time spent on manual reviews
✅ Better defensibility during tax audits
✅ Consistent application of selection criteria
✅ Clear audit trail with documented sources
✅ Ability to quickly adjust strategies as needed
The benchmark economics might be broken (as we've discussed before), but AI helps rebalance the equation by dramatically reducing the time investment while improving quality.
-
Contest Winning Strategies 101 - From Moore's Voting to Majority-in-Range Queries
Most FAANG interviews will test you on the classic Majority Element problem: find the element that appears > n/2 times in a static array. Moore's Voting Algorithm nails it in O(n) time and O(1) space:
1. Candidate selection (one pass)
2. Verification (second pass)
Every standard sheet in the market handles that - so let me ask you the next question: "What's the majority in A[L…R]?" - and you have up to 10⁵ queries on an array of size 10⁵. A brute-force O(n) per query is a non-starter. Here's how to level up:
✔️ 1. Store a "voting pair" per segment
Build a segment tree where each node over a subarray [l..r] keeps:
cand = the Moore candidate for that subarray
cnt = its "net vote" (votes for minus votes against)
Merging two children is just another round of voting:

    if left.cand == right.cand:
        merged.cand = left.cand
        merged.cnt = left.cnt + right.cnt
    elif left.cnt > right.cnt:
        merged.cand = left.cand
        merged.cnt = left.cnt - right.cnt
    else:
        merged.cand = right.cand
        merged.cnt = right.cnt - left.cnt

That gives you O(log N) per query to retrieve a single candidate for [L..R].
✔️ 2. Verify with position lists
Moore's trick only selects a candidate; you still need to confirm it really occurs > ⌊(R-L+1)/2⌋ times. Precompute pos[val] = sorted list of all indices where A[i] == val. Then in O(log N) you can count occurrences in [L..R] with two binary searches.
✔️ 3. Complexity
Build: O(N log N) (segment tree + position map)
Each query: O(log N) to get the candidate + O(log N) to verify → O(log N) total
This scales to 10⁵ queries in under a second, turning a one-pass offline trick into a real-time range service. A full runnable sketch follows below.
💡 Key takeaway: many "linear-time" interview algorithms can be lifted to range-query or dynamic settings by augmenting data structures (segment trees, BITs, etc.) with just a bit of extra info. Taking your understanding of standard problems to the next level is how you level up your DSA game. Stay tuned for more such content!
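Here is that full sketch in Python - a minimal, illustrative implementation of the (candidate, net-vote) segment tree plus binary-search verification. Names and the sample array are my own:

    from bisect import bisect_left, bisect_right
    from collections import defaultdict

    def merge(l, r):
        # One round of Moore voting over two (cand, cnt) pairs.
        (lc, ln), (rc, rn) = l, r
        if lc == rc:
            return (lc, ln + rn)
        if ln > rn:
            return (lc, ln - rn)
        return (rc, rn - ln)

    def build(a):
        n = len(a)
        tree = [(0, 0)] * (2 * n)            # assumes values != 0 (sentinel)
        for i, v in enumerate(a):
            tree[n + i] = (v, 1)
        for i in range(n - 1, 0, -1):
            tree[i] = merge(tree[2 * i], tree[2 * i + 1])
        return tree

    def query(tree, n, l, r):                # candidate on a[l..r], inclusive
        res = (0, 0)
        l += n; r += n + 1
        while l < r:
            if l & 1: res = merge(res, tree[l]); l += 1
            if r & 1: r -= 1; res = merge(res, tree[r])
            l >>= 1; r >>= 1
        return res[0]

    a = [1, 2, 1, 1, 3, 1, 2, 2]
    n = len(a)
    tree = build(a)
    pos = defaultdict(list)                  # value -> sorted index list
    for i, v in enumerate(a):
        pos[v].append(i)

    def majority(l, r):
        cand = query(tree, n, l, r)
        count = bisect_right(pos[cand], r) - bisect_left(pos[cand], l)
        return cand if count > (r - l + 1) // 2 else None

    print(majority(0, 5))   # 1 appears 4 times in a[0..5] -> 1
    print(majority(4, 7))   # [3, 1, 2, 2] has no majority -> None

Note the verification step is what makes this correct: the voting merge only guarantees that if a majority exists, it is the returned candidate.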
-
Another robotics masterpiece from our friends at Disney Research! Recent progress in physics-based character control has improved learning from unstructured motion data, but it's still hard to create a single control policy that handles diverse, unseen motions and works on real robots. To solve this, the team at Disney proposes a new two-stage technique. In the first stage, an autoencoder learns a latent-space encoding from short motion clips. In the second stage, this encoding helps train a policy that maps kinematic input to dynamic output, ensuring accurate and adaptable movements. By keeping these stages separate, the method benefits from better motion encoding and avoids common issues like mode collapse. The technique has been shown to be effective in simulation and has successfully brought dynamic motions to a real bipedal robot, marking an important step forward in robot control. You can find the full paper here: https://lnkd.in/d-kzexdJ What Markus Gross, Moritz Baecher, and the rest of the gang are bringing to life is unbelievable!
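For readers who think in code, here is a hedged PyTorch sketch of the two-stage structure: stage 1 compresses a short motion clip into a latent, stage 2 maps that latent to motor commands. All shapes, module sizes, and names are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    CLIP_LEN, JOINTS, LATENT, ACTIONS = 8, 12, 32, 12   # illustrative sizes

    class MotionAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(nn.Flatten(),
                                     nn.Linear(CLIP_LEN * JOINTS, LATENT))
            self.dec = nn.Sequential(nn.Linear(LATENT, CLIP_LEN * JOINTS),
                                     nn.Unflatten(1, (CLIP_LEN, JOINTS)))
        def forward(self, clip):
            z = self.enc(clip)               # latent motion encoding
            return self.dec(z), z

    policy = nn.Sequential(                  # stage 2: latent -> joint commands
        nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, ACTIONS))

    ae = MotionAutoencoder()
    clip = torch.randn(1, CLIP_LEN, JOINTS)  # a short kinematic motion clip
    recon, z = ae(clip)                      # stage 1: learn the encoding
    action = policy(z)                       # stage 2: track it dynamically
    print(recon.shape, action.shape)

Training the two stages separately (reconstruction loss for the autoencoder, RL or imitation loss for the policy) is the decoupling the post credits with avoiding mode collapse.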
-
🚨 New Report Release: OECD's 2025 Tax Administration Digitalisation and Digital Transformation Initiatives 🚨
Today the OECD published the latest edition of its Tax Administration Digitalisation and Digital Transformation Initiatives - and the results underscore just how quickly the digital tax landscape is evolving globally. Key takeaways from the 54 OECD FTA member administrations:
✅ Data Direct from Business Systems: Around 80% of tax administrations now receive data directly from taxpayers' business systems - and in some cases entirely machine-to-machine, without any human intervention.
🧾 Prefilled VAT Returns: Thanks to technologies like electronic invoicing systems, nearly 40% of administrations can now prefill VAT returns, simplifying processes for businesses and improving accuracy.
🤖 AI in Action: Over 70% of tax administrations are using AI - especially in fraud detection, risk assessment, and through virtual assistants to improve taxpayer services and boost compliance.
📖 Machine-Readable Law: Nearly 30% of administrations now publish all tax legislation in machine-readable formats, making it easier for systems (and developers) to integrate tax logic directly into business applications.
🔒 Trusted Tech & Transparency: A third of administrations are publishing lists of approved software products, increasing trust in the tools used for tax compliance and enabling a more secure, transparent ecosystem.
This report is a clear indicator: tax compliance is becoming more embedded, automated, and intelligent.
📥 Read the full report here: https://lnkd.in/euc4BJSV
-
Vanilla RAG struggles with structured knowledge sources like knowledge graphs (KGs). GNN-RAG is a very neat idea to fix this!
Vanilla RAG struggles with structured inputs like KGs because it relies heavily on LLMs for retrieval, and LLMs are not adept at handling the complex graph information inherent in KGs. This leads to suboptimal performance, especially on multi-hop and multi-entity questions that require traversing multiple relationships in the graph.
GNN-RAG integrates the strengths of both LLMs and Graph Neural Networks (GNNs) to solve this issue:
💡 GNN: Excels at processing and reasoning over graph structures. It reasons over a dense KG subgraph to retrieve answer candidates for a given question.
💡 LLM: Leverages its natural-language abilities to further reason over the information provided by the GNN.
Here's the workflow (a small code sketch follows below):
🔺 A GNN processes the KG to identify and retrieve candidate answers.
🔺 The shortest paths connecting question entities to answer candidates in the KG are extracted to represent reasoning paths.
🔺 These paths are verbalized and provided as input to the LLM for final reasoning and answer generation.
GNN-RAG achieves state-of-the-art results on two widely used KGQA benchmarks, WebQSP and ComplexWebQuestions (CWQ), and outperforms existing methods, including GPT-4, particularly on multi-hop and multi-entity questions. Link to the paper: https://lnkd.in/euC7N85K
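Here is that promised sketch of the path-extraction and verbalization step in Python, using networkx. The tiny graph, the stand-in GNN scores, and the score threshold are all made-up assumptions to show the shape of the pipeline, not the paper's data or model.

    import networkx as nx

    kg = nx.DiGraph()
    kg.add_edge("Jamaica", "English", relation="official_language")
    kg.add_edge("English", "UK", relation="spoken_in")

    question_entities = ["Jamaica"]
    gnn_candidates = {"English": 0.92, "UK": 0.31}   # stand-in GNN scores

    def verbalize_paths(kg, sources, candidates, threshold=0.5):
        lines = []
        for cand, score in candidates.items():
            if score < threshold:                    # keep high-scoring answers
                continue
            for src in sources:
                # Shortest KG path from a question entity to the candidate.
                path = nx.shortest_path(kg, src, cand)
                hops = [f"{u} --{kg[u][v]['relation']}--> {v}"
                        for u, v in zip(path, path[1:])]
                lines.append(" ; ".join(hops))
        return "\n".join(lines)   # fed to the LLM as reasoning-path context

    print(verbalize_paths(kg, question_entities, gnn_candidates))

The verbalized paths are what let the LLM do the final reasoning over graph structure it could not traverse on its own.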