Tech Policy Advocacy


  • View profile for Daniel Bamidele

Emerging AI Governance, Safety & Compliance Leader | Managed $1M+ Programmes Across 3 Continents | Information & Data Protection | Lead Contributor, Tokens & Tangents Newsletter. Building digitaltrustarchive.org

    7,148 followers

A Canadian government department wanted to use AI to process visa applications faster. Before they could deploy, they had to complete an Algorithmic Impact Assessment.

Question 15: "Could this system's decisions affect someone's legal rights?" Yes.
Question 23: "Will decisions be automatically made without human review?" Partially.
Question 31: "Does the system use machine learning trained on historical data?" Yes.

Final score: Level 3 (High Impact). Requirements triggered:
→ Explainability for every decision
→ Human review for all rejections
→ Quarterly bias testing
→ Public audit trail

The department couldn't deploy until these were in place.

Six months later, the system was processing applications 40% faster. But monitoring revealed something interesting: applications from certain countries were flagged for review at 3x the rate predicted. Because the assessment was public, a researcher noticed this gap. Investigation revealed the AI had learned patterns from old data, from a time when those countries had different visa requirements. The system was retrained. The assessment was updated. A public report explained what was learned.

This is what good governance looks like: not rules preventing deployment, not audits finding problems later, but transparency creating continuous learning.

The Canadian approach proves something crucial: you don't need complex regulations. You need organizations to commit publicly to their AI's impact, then govern the gap between promise and reality.

Simple. Transparent. Effective. Why isn't everyone doing this?

#AIRegulation #AIPolicy #DigitalGovernance #TechPolicy #RegulatoryCompliance
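Canada's actual AIA is a weighted questionnaire whose total score maps to one of four impact levels, each triggering its own obligations. The sketch below illustrates that mechanism only; the question weights, thresholds, and requirement lists are made up for this example and do not reproduce the real instrument:

```python
# Toy sketch of a questionnaire-driven algorithmic impact assessment.
# Weights, thresholds, and triggered requirements are hypothetical and
# do NOT reproduce Canada's actual AIA scoring.

ANSWERS = {
    "affects_legal_rights": "yes",             # Q15 in the story above
    "automated_without_review": "partially",   # Q23
    "ml_on_historical_data": "yes",            # Q31
}

WEIGHTS = {"no": 0, "partially": 10, "yes": 25}

def impact_level(answers: dict) -> tuple[int, list[str]]:
    points = sum(WEIGHTS[v] for v in answers.values())
    if points >= 50:   # high impact: strictest obligations
        return 3, ["explainability for every decision",
                   "human review for all rejections",
                   "quarterly bias testing",
                   "public audit trail"]
    if points >= 25:   # moderate impact
        return 2, ["explainability for every decision"]
    return 1, []       # little impact

level, requirements = impact_level(ANSWERS)
print(f"Level {level}; requirements: {', '.join(requirements) or 'none'}")
# -> Level 3; requirements: explainability for every decision, ...
```

The point of the mechanism is that the score, and therefore the triggered obligations, are computed before deployment and published, which is what let an outside researcher catch the flagging disparity later.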

  • View profile for Vlada Bortnik

    CEO & Co-founder, Marco Polo | Helping millions feel close | Writing & speaking on inspired capitalism, conscious leadership, and the inner journey towards *being* enough

    8,555 followers

6% of global revenue. That's the fine Denmark will impose on social platforms that fail to keep kids under 15 off their sites.

Right now, 94% of Danish kids under 13 have social media profiles. More than half of kids under 10 do, too — despite platforms' own age restrictions.

Denmark's Digital Minister Caroline Stage Olsen put it bluntly: "The amount of violence, self-harm that they are exposed to online is simply too great a risk for our children."

For years, platforms followed a familiar playbook: maximize time-on-site, optimize for engagement, ignore the consequences. And when regulators push back? Cry complexity. But complexity isn't the issue. Willingness is.

This is bigger than Denmark. Australia's age restriction law kicks in next month. France and the Netherlands are advancing similar measures. The trend is clear: governments are finally done waiting for self-regulation that has never materialized.

Why the 6% fine matters: when the cost of inaction exceeds the profit from delay, innovation becomes inevitable. Companies like Marco Polo, Sage Haven, and VSCO® are proving you can build with wellbeing at the core and still thrive. What does that look like?
- Focus on connection and value, not time spent.
- Friction where it's needed.
- Fewer (or no) algorithmic accelerants.
- Transparent research on mental health impacts.

Stage said it plainly: "We've given the tech giants so many chances to stand up and do something about what is happening on their platforms. They haven't done it."

Now the question isn't whether reform is coming — it's whether platforms will shape it or resist it, country by country, year by year.

#ConsciousLeadership #TechEthics #DigitalResponsibility

Update: Just to be clear, they have alignment on the ban, but from the article: "Stage said a ban won’t take effect immediately. Allied lawmakers on the issue from across the political spectrum who make up a majority in parliament will likely take months to pass relevant legislation. “I can assure you that Denmark will hurry, but we won’t do it too quickly because we need to make sure that the regulation is right and that there is no loopholes for the tech giants to go through,” Stage said. Her ministry said pressure from tech giants’ business models was “too massive.”"

  • View profile for Rob T. Lee

Chief AI Officer (CAIO), Chief of Research, SANS Institute | “Godfather of Digital Forensics” | Executive Leader | AI Strategist | Advising C-Suite Leaders on Secure AI Transformation | Technical Advisor to US Govt

    22,991 followers

The White House dropped its National Cyber Strategy yesterday, and I’ve read it three times; it’s short, but it hits the right points. (That’s either diligence or a sign I need more sleep.)

My honest take: more right than wrong, with one gap worth watching.

What they got right is significant. Pillar 5 explicitly calls for “rapidly adopt and promote agentic AI in ways that securely scale network defense and disruption.” That sentence alone represents a meaningful shift in federal posture. For years, the policy conversation treated AI as something to study carefully. This document treats it as something to deploy. That’s the correct urgency. The GTG-1002 operation – Chinese state actors running Claude Code at 80-90% autonomy for offensive reconnaissance – happened in 2025. Our defensive posture can’t be operating on a different clock.

Pillar 2 on common-sense regulation: “Streamline cyber regulations to reduce compliance burdens” and “ensure that the private sector has the agility necessary to keep pace with rapidly evolving threats.” I’ve been making exactly this argument, framed as a cybersecurity safe harbor. Defenders need the equivalent of the HIPAA exceptions that doctors received: the ability to analyze sensitive data strictly for threat detection without running afoul of the same privacy laws our adversaries ignore. (Attackers don’t schedule GDPR reviews.) The regulatory direction here is right.

The offensive posture throughout the document also reflects a reality check that previous strategies often softened. Shaping adversary behavior, using all instruments of national power, not confining responses to the cyber realm – that’s accurate threat modeling. The asymmetry is real, and this strategy names it.

The place I’d love to see version two develop further is the workforce pillar. Pillar 6 describes pipelines, academia, vocational schools, existing credentialing pathways. What it doesn’t yet address is the continuous education model: the delta between a certification earned three years ago and the threat landscape operators face today. AI-augmented attacks are evolving monthly. (I sit on the CSIS Commission on U.S. Cyber Force Generation, and this is exactly the conversation we keep coming back to.) The document builds the right foundation. Workforce readiness is an ongoing operating expense, not a one-time investment. That’s less a criticism than a prompt for the implementation documents that follow.

Strategies set direction. The workforce question is where direction meets execution – and getting that right is what makes the other five pillars actually work.

Read it. The AI deployment posture, regulatory flexibility, and offensive framing are worth your time. The workforce section is where practitioners like you and me have the most to contribute to what comes next.

  • View profile for Anurag (Anu) Karuparti

    Agentic AI Strategist @Microsoft (30k+) | Author - Generative AI for Cloud Solutions | LinkedIn Learning Instructor | Responsible AI Advisor | Ex-PwC, EY | Marathon Runner

    30,765 followers

𝐀𝐈 𝐂𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐜𝐞 & 𝐃𝐚𝐭𝐚 𝐏𝐫𝐨𝐭𝐞𝐜𝐭𝐢𝐨𝐧 𝐋𝐚𝐰𝐬 𝐟𝐨𝐫 𝐆𝐞𝐧𝐀𝐈 𝐀𝐩𝐩𝐬

Building GenAI apps for a global audience? Understanding regional data protection and AI laws is not optional; it is foundational. Here is what you need to know:

1. UNDERSTANDING GLOBAL REGULATORY VARIANCE
Key regulations by region:
• EU AI Act: risk-based obligations and transparency requirements for certain AI systems and use cases
• GDPR (EU): transparency & consent
• DPDP (India): digital personal data protection
• PIPL (China): strict data localization
• CCPA (California): data access & opt-out
• LGPD (Brazil): local compliance rules

2. IMPACT OF THESE REGULATIONS ON YOUR AI TRAINING DATA
To build compliant GenAI apps, ensure that data used for training AI models follows the regional rules at every stage: Data Collection → Processing → Model Training → Deployment.
Three core requirements (see the sketch after this post):
a. User Consent: obtain explicit consent for data collection and use
b. Data Minimization: collect only the data necessary for the intended purpose
c. Anonymization: remove personally identifiable information from training data

3. MITIGATING AI ETHICS AND BIAS RISKS
AI systems must be fair and ethical, particularly in high-risk areas:
a. Fairness: ensure your AI models don't discriminate, especially in areas like recruitment or finance
b. Bias Mitigation: regularly test and adjust your models to reduce bias in their outputs

4. ENSURING TRANSPARENCY IN AI MODEL DEVELOPMENT
Transparency is a cornerstone of compliance, especially when your AI impacts users directly:
a. Explainability: document how your models reach their outputs
b. Consent Management: collect, track, and manage user consent
c. Privacy by Design: embed privacy into every system layer

5. MANAGING CROSS-BORDER DATA FLOW
GenAI apps often rely on data from various regions, so it's critical to understand data sovereignty laws:
a. Data Sovereignty: follow local laws on where data is stored and processed
b. Data Transfer Agreements: use SCCs or BCRs for compliant cross-border transfers

THE COMPLIANCE CHECKLIST
Before launching GenAI globally, verify:
1. Regional compliance: GDPR for the EU (transparency & consent)? DPDP for India (data protection)? PIPL for China (data localization)? CCPA for California (access & opt-out)? LGPD for Brazil (local rules)?
2. Training data: user consent obtained? Data minimized? PII anonymized?
3. Ethics & bias: fairness tested? Bias mitigation in place?
4. Transparency: explainability documented? Consent management system? Privacy by design?
5. Cross-border: data sovereignty compliance? Transfer agreements (SCCs/BCRs)?

Each region has different requirements. Build for the strictest, adapt for the rest.

Which regulation applies to your GenAI app?
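As a concrete illustration of the training-data requirements above (2b, data minimization, and 2c, anonymization), here is a minimal sketch of scrubbing records before training. The field names and regex patterns are hypothetical assumptions for this example; a production pipeline would use a vetted PII-detection library and jurisdiction-specific legal review:

```python
import re

# Illustrative sketch of data minimization + anonymization before training.
# Patterns and field names are hypothetical; real pipelines should rely on
# a vetted PII-detection tool and jurisdiction-specific legal guidance.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

# Data minimization: keep only the fields training actually needs.
ALLOWED_FIELDS = {"text", "label"}

def anonymize(record: dict) -> dict:
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    text = minimized.get("text", "")
    text = EMAIL.sub("[EMAIL]", text)   # anonymization: redact emails
    text = PHONE.sub("[PHONE]", text)   # anonymization: redact phone numbers
    minimized["text"] = text
    return minimized

raw = {"text": "Contact me at jane@example.com or +45 1234 5678",
       "label": "support",
       "user_id": "u-9912"}  # dropped by minimization below
print(anonymize(raw))
# {'text': 'Contact me at [EMAIL] or [PHONE]', 'label': 'support'}
```

Regex redaction alone does not satisfy any specific law; it simply shows where minimization and anonymization sit in the Collection → Processing → Training pipeline described above.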

  • View profile for Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    68,033 followers

    "Disinformation campaigns aimed at undermining electoral integrity are expected to play an ever larger role in elections due to the increased availability of generative artificial intelligence (AI) tools that can produce high-quality synthetic text, audio, images and videos and their potential for targeted personalization. As these campaigns become more sophisticated and manipulative, the foreseeable consequence is further erosion of trust in institutions and heightened disintegration of civic integrity, jeopardizing a host of human rights, including electoral rights and the right to freedom of thought. → These developments are occurring at a time when the companies that create the fabric of digital society should be investing heavily in, but instead are dismantling, the “integrity” or “trust and safety” teams that counter these threats. Policy makers must hold AI companies liable for the harms caused or facilitated by their products that could have been reasonably foreseen. They should act quickly to ban using AI to impersonate real people or organizations, and require the use of watermarking or other provenance tools to allow people to differentiate between AI-generated and authentic content." By David Evan Harris and Aaron Shull of the Centre for International Governance Innovation (CIGI).

  • View profile for Felix M. Simon

    Research Fellow in AI, Information and News, Reuters Institute & DPIR, University of Oxford | Research Associate, Oxford Internet Institute | Junior Research Fellow in Politics, Corpus Christi College

    7,547 followers

✨New working paper on the trade-offs involved in AI transparency in news 🤖📝

How does a global news organisation disclose its use of AI? Where, when and how should readers be told when algorithms shape the news they consume? Based on a case study of the Financial Times and led by Liz Lohn, we argue that transparency about AI in news is best understood as a spectrum, evolving with tech advancements; commercial, professional and ethical considerations; and shifting audience attitudes.

🔗Pre-print: https://lnkd.in/gV3dPXgS

1️⃣ AI‑transparency ≠ a binary. At the FT it’s a hybrid of policy, process and practice. Senior leadership sets explicit principles, cross‑functional panels vet new applications, and AI use is signposted in internal/external tools and reinforced through training.

2️⃣ Disclosure is calibrated to context. Internally, full disclosure aims to reduce frictions and surface errors early; externally, labels are scaled with autonomy and oversight. No‑human‑in‑the‑loop features (e.g. Ask FT) get prominent warnings, whereas AI‑assisted, journalist‑edited outputs (e.g. bullet‑point summaries) get lighter labelling. (A toy sketch of this calibration follows below.)

3️⃣ Nine factors shape what, when & how the FT discloses AI use. These include legal/provider requirements, industry benchmarking, the degree of human oversight, the nature of the task, system novelty, audience expectations & research, perceived risk, commercial sensitivities and design constraints.

4️⃣ Persistent challenges include achieving consistent labelling (especially on mobile), breaking organisational silos, keeping pace with evolving models and norms, guarding against creeping human over‑reliance, and mitigating “transparency backfire”, where disclosures reduce trust.

For those of you more academically interested in this: we argue that AI transparency at the FT is shaped by isomorphic pressures – regulations, peer practices and audience expectations – and by intersecting institutional logics. Internally, managerial and commercial logics push for efficient adoption and risk management; externally, professional journalism ethics and commercial imperatives drive an aim to remain trustworthy. Crucially, we argue that AI transparency is best seen as a spectrum: optimising one factor (e.g. maximum disclosure) can undermine others (e.g. perceived trust or revenue). There does not seem to be a one‑size‑fits‑all rule; instead, transparency must adapt to organisational context, audiences and technology.

We are very grateful to the team at the Financial Times, particularly Matthew Garrahan, for supporting this study from the outset – and to the participants from the FT who volunteered their precious time to help us understand this issue.

Feedback welcome, especially on the theoretical section and the discussion, as well as literature that we will have missed! So feel free to plug your own or other people’s material, all of which will be appreciated as Liz and I work towards a journal submission.
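To make the calibration in point 2 concrete, here is a minimal sketch of a labelling rule that scales disclosure prominence with autonomy and human oversight. The tiers and wording are my illustrative reading of the paper's examples, not the FT's actual policy:

```python
# Illustrative sketch: scale AI-disclosure prominence with system autonomy
# and human oversight. Tiers and label text are hypothetical, not FT policy.

def disclosure_label(no_human_in_loop: bool, journalist_edited: bool) -> str:
    if no_human_in_loop:
        # e.g. a fully automated Q&A feature such as Ask FT
        return "PROMINENT WARNING: AI-generated answer, no human review"
    if journalist_edited:
        # e.g. AI-drafted bullet-point summaries, edited before publication
        return "Light label: AI-assisted, reviewed by journalists"
    # fallback tier, assumed here for illustration
    return "Internal disclosure only (logged for audit, not shown to readers)"

print(disclosure_label(no_human_in_loop=True, journalist_edited=False))
print(disclosure_label(no_human_in_loop=False, journalist_edited=True))
```

The point of the spectrum argument is precisely that this function would differ by organisation, audience and product; no single rule fits all contexts.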

  • View profile for Monica Jasuja

    Where Payments, Policy and AI Meet | LinkedIn Top Voice | Global Keynote Speaker | Board Advisor | PayPal, Mastercard, Gojek Alum

    84,592 followers

Stablecoins are being regulated less like crypto and more like national payment infrastructure. 1:1 backing is the minimum. Resilience above 100% is the real requirement (sketched in the example below). That single design shift quietly rewrites the economics of digital money.

Source: Global Stablecoin Regulatory Playbook (Jan 2026), Global Digital Finance. This isn’t a crypto manifesto. It’s a regulatory blueprint that treats stablecoins as payment infrastructure, not speculative assets.

↳ Stats that demand attention
• 1:1 reserve backing is now non-negotiable across major regimes (US GENIUS Act, EU MiCA, UK proposals, SG, UAE)
• 0% yield tolerance for payment stablecoins → any yield feature risks reclassification as a security
• Monthly reserve attestations mandated in the US and Singapore vs quarterly or annual elsewhere
• ≤ 5 business days is the global norm for guaranteed par redemption, not T+0
• Short-dated government debt (≤ 6–12 months) preferred over bank deposits for systemic issuers
• Systemic designation thresholds hinge on transaction volume, not market cap or user count

Counterintuitive takeaway: faster redemption ≠ safer stablecoin. Asset quality beats redemption speed.

↳ Three insights reshaping the industry

1) Market structure is consolidating by design
• Compliance costs scale non-linearly → favors large, multi-jurisdiction issuers
• Smaller issuers survive only via niche corridors or local use cases
• Stablecoins are drifting closer to regulated payment rails, not open crypto markets

2) The real blind spot is cross-border equivalence
• Most jurisdictions still lack clear reciprocity rules
• Local-issuance mandates fracture fungibility, raising costs for global payments
• Regulatory trust, not technology, is the binding constraint

3) Stablecoins are becoming macro-economic instruments
• Reserves channel demand into sovereign debt and central bank deposits
• Regulation is now monetary policy by another name
• Payment stablecoins are explicitly designed not to compete with savings products

↳ My perspective
• Trust scales before innovation: payments only grow when regulators are confident in failure modes
• Uniform outcomes matter more than uniform rules: functional equivalence beats legal symmetry
• The business model is shifting: stablecoins are utilities, not growth hacks

The open question isn’t whether stablecoins will be regulated. It’s who absorbs the cost of safety: issuers, users, or the broader financial system.

Which future do you think wins?
A) A few global stablecoin utilities embedded into payment stacks
B) Region-specific stablecoins aligned to local monetary policy
C) Bank-issued tokenised deposits overtaking stablecoins
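The backing arithmetic behind "1:1 is the minimum, resilience above 100% is the real requirement" fits in a few lines. The haircut values and resilience buffer below are hypothetical assumptions for illustration, not figures from the playbook:

```python
# Illustrative reserve-adequacy check for a payment stablecoin.
# Haircuts and the resilience buffer are hypothetical assumptions,
# not figures from the Global Stablecoin Regulatory Playbook.

HAIRCUTS = {                    # valuation discount per reserve asset class
    "t_bills_under_6m": 0.00,   # short-dated government debt: preferred
    "bank_deposits": 0.05,      # less preferred for systemic issuers
}
RESILIENCE_BUFFER = 0.02        # require 102% effective backing, not bare 1:1

def backing_ratio(reserves: dict, tokens_outstanding: float) -> float:
    effective = sum(v * (1 - HAIRCUTS[k]) for k, v in reserves.items())
    return effective / tokens_outstanding

reserves = {"t_bills_under_6m": 95_000_000, "bank_deposits": 10_000_000}
ratio = backing_ratio(reserves, tokens_outstanding=100_000_000)
print(f"effective backing: {ratio:.2%}")          # 104.50%
print("compliant:", ratio >= 1 + RESILIENCE_BUFFER)
```

This is also why asset quality beats redemption speed: a portfolio of zero-haircut short-dated government debt keeps the effective ratio above the buffer even under stress, regardless of how fast redemptions settle.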

  • View profile for Doug Taylor

Chief Executive Officer, Board Member and Adjunct Professor. Social Impact – Leadership, Governance & Education.

    9,782 followers

The promise of AI is huge; it could help close the learning gap for students who fall behind because of poverty. But as AI innovation surges ahead, the divide between those who can access and use it effectively – and those who can’t – grows every day.

Imagine being a student without a laptop or reliable internet. How do you benefit from AI when you can’t even get online? At The Smith Family, we see firsthand how digital exclusion holds students back from making the most of their education – 44% of students on our Learning for Life program lack a device, internet access or both at home.

In this opinion piece for The Mandarin, I explore what this means for students experiencing disadvantage – for their education, their skills and their future careers – and outline four steps we must take to ensure AI and digital technology expand opportunity for everyone, not just a few.

Inequity isn’t inevitable, but fairness and inclusion don’t simply happen. They require action. Not in 10 years. Not next year. But right now.

You can read the full piece here: https://bit.ly/48ev3RS

  • View profile for Melanie Nakagawa

    Chief Sustainability Officer @ Microsoft | Combining technology, business, and policy for change

    108,910 followers

For millions in Northeastern Brazil, a lack of internet access isn't just a technical issue; it's a barrier to education and jobs in rural areas. But in working to overcome this barrier, we are also finding new opportunities to scale our community engagement.

I recently met with the team at Brisanet Telecomunicações, the region's largest fiber-optic provider. Microsoft has been partnering with them since November 2022. Since then, we've worked together to strengthen network infrastructure to enable Fiber-to-the-Home, which brings high-speed internet directly to homes, and Fixed Wireless Access, which delivers wireless broadband to rural areas where laying cables is difficult. To date, Brisanet has brought 1 million people online, creating unprecedented opportunities for communities that were previously left behind.

Three years in, we are on track to meet our shared goals of helping these services take root across the community, empowering people to pursue jobs, advance their careers, and improve their overall wellbeing. And connectivity is proving to be the foundation for even more impact. Today, this partnership is a blueprint for integrated progress: helping rural farmers transition to clean energy and use it to irrigate more sustainably during dry seasons, and to diversify crops with options like pitaya and acerola.

Digital inclusion and climate action are deeply connected. When communities can access both connectivity and clean energy, they gain adaptability and the capacity to thrive in the face of global challenges.

🎥 Watch this video to learn more:

  • View profile for Sharat Chandra

    Blockchain & Emerging Tech Evangelist | Driving Impact at the Intersection of Technology, Policy & Regulation | Startup Enabler

    48,270 followers

#blockchain | #defi : The US Commodity Futures Trading Commission (CFTC) has recently released a comprehensive report addressing the challenges and opportunities in the rapidly evolving world of Decentralized Finance (DeFi). The report underscores the critical need for clear lines of responsibility and accountability within the DeFi space, urging policymakers to take proactive measures in areas such as #antimoneylaundering and #digitalidentity.

Key recommendations from the CFTC report:

1️⃣ Resource Assessment and Mapping: Emphasizing the importance of technical capacity, the report calls for increased understanding of DeFi. Mapping existing DeFi structures will aid in highlighting interconnections, threat vectors, and potential cybersecurity vulnerabilities. The goal is to develop continuous data gathering, monitoring, information sharing, and regulatory partnerships.

2️⃣ Regulatory Perimeter Examination: The CFTC encourages a thorough examination of the regulatory perimeter, using the mapped data to determine the inclusion of DeFi products and services within the US financial regulatory framework. This includes assessing compliance levels, identifying regulatory gaps, and potentially expanding frameworks to address associated risks.

3️⃣ Risk Identification and Prioritization: The report delves into various risks such as asymmetric information, operational vulnerabilities, liquidity mismatches, and market manipulation. Understanding the financial and technological complexity of DeFi compositions is crucial. This includes evaluating risks related to algorithmic failures, concentration, and illicit finance.

4️⃣ Policy Responses: To address identified risks, the CFTC proposes a range of potential policy responses. These include measures like disclosure, regulatory reporting, third-party auditing, entry restrictions, governance regulation, and more. Striking the right balance between #innovation and risk mitigation is at the core of these proposed responses.

5️⃣ Engagement and Collaboration: Fostering greater engagement and collaboration with domestic and international standard setters, regulatory efforts, and DeFi builders is highlighted as a key step. This collaborative approach aims to create a well-informed and adaptive regulatory environment for the evolving DeFi landscape.

The CFTC's report marks a significant milestone in the ongoing dialogue surrounding DeFi regulation. As the industry continues to mature, these recommendations provide a solid foundation for shaping policies that balance innovation and risk management. 💡🌐

#DeFi #Regulation #InnovationInTheFuture
