In brief...
A recent technology policy conference examined how translating research into policy is complex, constantly evolving, and shaped by timing, incentives, communication gaps, and overlapping institutional roles. Rather than a closed loop, it is an ongoing negotiation between researchers, agencies, and other actors striving to move in the same direction.
On June 6, 2025, the University of Maryland (UMD) Tech Policy Hub hosted its first annual event, “Closing the Loop: How Tech Policy and Research Can Shape Each Other.” Held at UMD’s College Park campus, the event convened academics, policymakers, technologists, and civil society leaders to explore four pressing domains in technology governance: cybersecurity, consumer privacy, artificial intelligence (AI), and information integrity. Co-hosted with the College of Information Studies, the event aimed to “crystallize findings on how to build the bridge between tech policy research and practice,” while emphasizing the importance of translating research into policy. Across the panels, one core question emerged: How do we ensure that policy and research not only inform each other but evolve together?
While the event looked ahead to emerging issues, it also acknowledged the practical hurdles involved in turning research into policy. Participants emphasized that this process is rarely linear: it is complex, constantly evolving, and shaped by timing, incentives, communication gaps, and overlapping institutional roles. Rather than a closed loop, it is an ongoing negotiation between researchers, agencies, and other actors striving to move in the same direction. As one participant noted, when the federal government is absent, states, markets, and international actors step in. This patchwork response can drive innovation, but it can also create fragmentation and confusion.
Panel Highlights
Bridging Research and Practice on Consumer Privacy
During the panel on consumer privacy, Michelle Mazurek (UMD) presented empirical findings that highlighted the gap between the promise and reality of data transparency tools. Her team’s research focused on Data Subject Access Requests (DSARs), a legally mandated process that allows individuals to request access to the personal data companies hold about them. The study found that while DSARs are intended to give users control, they often overwhelm them with complex, dense, or unclear data. This results in a transparency paradox: the more information made available, the less actionable it becomes. Mazurek called for thoughtful interface designs that reflect how users interact with their data, designs that clarify intent, filter noise, and empower meaningful choices. Drawing from her team’s findings, these include in-line definitions, embedded explanations, search and filter functions, and visual summaries that reduce cognitive overload and make privacy controls more meaningful. She also flagged the spread of misleading virtual private network (VPN) ads on platforms like YouTube, suggesting that influencer-driven privacy narratives may do more harm than good.
Paul Ohm (Georgetown University Law Center) argued that effective privacy policy depends not just on new legal and technical ideas, but on creating strong institutional bridges between researchers and policymakers. He illustrated this by tracing the evolution of privacy law, from foundational torts to current debates on fairness, algorithmic accountability, and differential privacy, showing how scholarship has long shaped policy. Ohm emphasized the importance of translational mechanisms, such as secondments and intergovernmental personnel assignments, which enable researchers to work within government. These institutional bridges are critical for ensuring that academic knowledge informs regulatory decisions, an approach reflected in the Federal Trade Commission’s engagement with researchers through its annual PrivacyCon event, which brings scholars directly into policy conversations.
Governing AI Without Waiting for Congress
The AI governance panel examined how researchers and civil society organizations are stepping into a regulatory void as federal policy lags behind technological developments. Christabel Randolph (Center for AI and Digital Policy) introduced the AI and Democratic Values Index, which benchmarks how 80 countries align their AI policies with democratic principles like public participation, independent oversight, and transparency. Her presentation emphasized that responsible AI governance must be rooted in fundamental rights and the rule of law, not just efficiency or innovation. The 2025 Index places the United States in Tier III, signaling that while it endorses major international AI treaties, it still falls short on cohesive domestic legislation and comprehensive oversight compared to top-ranking countries like Canada, Japan, and the Netherlands.
Carl Hahn (Gentic Global Advisors) offered a private sector perspective, arguing that companies have a fiduciary duty to deploy AI systems in ways that are ethical, transparent, and aligned with stakeholder interests. In the absence of clear federal regulation, he advocated for the formation of voluntary consortia, industry-led groups where companies, experts, and sometimes regulators collaborate to define shared terminology, performance metrics, and governance practices for AI. Examples of such initiatives include the Partnership on AI or sector-specific working groups that aim to build trust and set informal standards. Hahn warned that without this kind of proactive coordination, regulatory uncertainty could stall innovation and erode public confidence in AI systems.
Anna Lenhart (UMD) added a pragmatic policy perspective on how meaningful AI regulation can move forward even in the absence of federal legislation. Drawing from her recent writing in Tech Policy Press, including “Do We Need a NIST for the States?” and “Leveraging International Standards to Protect US Consumers Online, No Congress Required,” she outlined a vision in which U.S. states take the lead by referencing international standards in their own enforceable frameworks. This approach, she argued, would allow for timely oversight without waiting on congressional action. While other panelists spoke about the influence of soft law, Lenhart focused on how international standards could be embedded directly into “hard law” at the state level, creating legally binding safeguards grounded in globally recognized principles.
Information Integrity in a Decentralized Age
In the final session, Giovanni Luca Ciampaglia (UMD) unpacked the role of recommender systems, algorithms designed to personalize content feeds and maximize user engagement, in shaping public opinion. His research showed that platforms optimized for engagement often end up amplifying low-quality or hyper-partisan content. Ciampaglia proposed measuring “audience diversity” through tools like Shannon entropy, which captures the variety and unpredictability of an audience’s political leanings, and the Gini index, which reflects how unevenly that audience is concentrated across the political spectrum. These metrics, he explained, can help identify platforms that attract more ideologically diverse users. He also discussed emerging models like algorithmic marketplaces and agentic AI, suggesting that decentralized approaches to content curation could offer a healthier alternative to current systems.
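To make these metrics concrete, the short sketch below (an illustration, not drawn from Ciampaglia’s presentation) computes Shannon entropy and the Gini index for a hypothetical distribution of an outlet’s audience across political-leaning bins; the bin shares and outlet labels are invented for the example.

```python
import numpy as np

def shannon_entropy(shares):
    """Entropy (in bits) of an audience's distribution across political-leaning
    bins; higher values indicate a more varied, less predictable audience."""
    p = np.asarray(shares, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def gini_index(shares):
    """Gini index of the same distribution; 0 means the audience is spread
    evenly across bins, values near 1 mean it is concentrated in one bin."""
    x = np.sort(np.asarray(shares, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return float((n + 1 - 2 * (cum / cum[-1]).sum()) / n)

# Hypothetical audience shares across five leaning bins
# (far left, left, center, right, far right) for two made-up outlets.
broad_outlet = [0.18, 0.22, 0.25, 0.20, 0.15]
narrow_outlet = [0.01, 0.03, 0.06, 0.30, 0.60]

for name, shares in [("broad", broad_outlet), ("narrow", narrow_outlet)]:
    print(name, round(shannon_entropy(shares), 2), round(gini_index(shares), 2))
```

In this toy example the “broad” outlet scores high on entropy and low on the Gini index, while the “narrow” outlet does the opposite, matching the intuition that an ideologically diverse audience signals broader, healthier reach.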
Other panelists contributed perspectives on the tensions between regulation, platform accountability, and the realities of content moderation. The panel made clear that addressing information integrity will require not just better algorithms, but also stronger civic norms and institutional guardrails.
Closing Thoughts
Throughout the day, participants returned to a shared challenge: translation. Researchers must learn to tell the story of their work in ways that resonate with policymakers, while policymakers must remain open to evidence that may challenge their assumptions. One of the event’s most compelling threads was how scholars from different disciplines—law, policy, economics, and computer science—approached that challenge from distinct but complementary angles. Legal scholars emphasized durable institutional mechanisms and regulatory precedent, focusing on how ideas become policy through translational infrastructure. Policy and political science experts focused on subnational innovation and the strategic use of international standards. Technologists offered empirical tools to expose and mitigate systemic risks. Despite their varied approaches, all converged on a shared goal: bridging the gap between knowledge and action.
Closing the loop means more than aligning incentives or timelines. It means designing systems, human and institutional, that allow truth, transparency, and trust to flow in both directions. UMD’s Tech Policy Hub has created a much-needed space for this kind of exchange. The challenge now is to carry that vision beyond a single event and embed it in the everyday mechanics of tech governance.