AI systems present information without citing sources. Users cannot verify accuracy. Publishers cannot measure usage. Brands cannot track representation.
Transparency in AI systems is possible. Source attribution enables users to verify information, publishers to measure usage, brands to track representation, and AI companies to establish sustainable content access. The technology exists. The standards can be established. What's required is collective action.
OpenAttribution ensures publishers, brands, and data providers can track, verify, control, and monetize how AI systems use their proprietary information.
Content owners know when and where their information is used
Proper credit when content informs AI outputs
Faithful representation of content and brand messaging
Appropriate compensation when content drives value
Ensure content creation remains economically viable
Verify how you're represented and correct misrepresentations or harmful associations
AI Systems Use Content Without Transparency or Accountability
Brand authenticity matters
Legitimate businesses need verification and control
Publishers and writers
Creating quality content that powers AI
AI companies market systems as authoritative information sources and "trusted thinking partners." Terms of service disclaim liability for accuracy and place verification responsibility on users. Without source attribution, users cannot fulfill this responsibility. Air Canada was held liable when its chatbot hallucinated a refund policy—the tribunal ruled companies are responsible for outputs they market as authoritative. Publishers report 20-40% traffic declines but have no way to measure how their content is used. Brands cannot verify their representation. Content is redistributed without attribution or accountability mechanisms.
AI companies market their systems as "trusted thinking partners" and authoritative information sources. Their terms of service disclaim all liability for accuracy, requiring users to verify outputs. Courts are rejecting this approach—Air Canada was held liable for its chatbot's hallucinated policy. Companies cannot market authority while disclaiming accountability for accuracy.
AI presents information without citing sources. Users cannot verify claims, check context, or assess reliability. Without attribution, users cannot exercise the "human review" AI companies require in their disclaimers. This places responsibility on users while removing the tools to fulfill that responsibility.
AI systems consume articles, research, and content, then present synthesized responses without attribution. Content creators get no credit, no traffic, and no ability to measure their contribution to AI outputs.
Brands cannot verify how they're represented in AI outputs. Wrong prices, outdated specifications, associations with fraudulent competitors - legitimate businesses have no visibility into AI representations and no way to correct misrepresentations.
Curated product catalogs, pricing information, specifications, and availability data require significant investment to build, yet AI systems redistribute them without attribution, measurement, or compensation mechanisms.
Publishers (articles, research, content) → AI Systems (train & respond) → Revenue (AI companies profit) → Publishers get one-off deals without measurement.
AI companies market systems as authoritative while disclaiming liability for accuracy. OpenAI's terms require users to verify outputs and warn against relying on them as truth, yet OpenAI markets ChatGPT as a "trusted thinking partner." Without source attribution, users cannot perform the verification companies require. Air Canada learned this when it was held liable for its chatbot's hallucinated refund policy: the tribunal ruled that companies are responsible for what their systems output. Meanwhile, the search traffic that funds content creation is collapsing: news sites see 56-69% zero-click searches, and publishers report 20-40% traffic declines. No measurement exists for how extensively content powers these "authoritative" AI outputs.
20-40%
Publisher traffic lost
56-69%
Zero-click searches
???
Unmeasured value
Standards, Policy, and Negotiation
OpenAttribution is a coalition of publishers, brands, and technology providers establishing standards for source attribution in AI systems. We develop technical specifications like ACAS, advocate for policy frameworks that clarify legal obligations, and negotiate directly with AI providers to implement transparency at scale.
Example AI Response with ACAS:
> AI Response: "The best restaurants in NYC include..."
> acas_source: "timeout.com/nyc-restaurants-2024"
> acas_timestamp: "2025-01-15T10:30:00Z"
> acas_usage: "reformulation"
> acas_confidence: 0.92
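For illustration, one way such attribution metadata could travel with a generated answer is as structured records attached alongside the text. The following is a minimal sketch in Python, assuming only the fields shown in the example above; the `AcasAttribution` and `AttributedResponse` types are hypothetical and do not reproduce the full ACAS v1.0 specification.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names mirror the example above, but these
# classes are hypothetical, not the published ACAS v1.0 structures.
@dataclass
class AcasAttribution:
    acas_source: str        # canonical URL of the content that informed the answer
    acas_timestamp: str     # ISO 8601 time the source was accessed or indexed
    acas_usage: str         # how the source was used ("reformulation" in the example above)
    acas_confidence: float  # estimated likelihood that this source informed the output

@dataclass
class AttributedResponse:
    text: str                            # the generated answer shown to the user
    attributions: list[AcasAttribution]  # one record per contributing source

response = AttributedResponse(
    text="The best restaurants in NYC include...",
    attributions=[
        AcasAttribution(
            acas_source="timeout.com/nyc-restaurants-2024",
            acas_timestamp="2025-01-15T10:30:00Z",
            acas_usage="reformulation",
            acas_confidence=0.92,
        )
    ],
)
```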
Building attribution systems like ACAS that track content usage
Shaping fair use laws and regulations for the AI era
Creating real compensation paths for publishers and content owners
Negotiating directly with OpenAI, Anthropic, Google, and others
Recruit major publishers, brands, and technology providers. Establish collective bargaining power.
Open direct negotiations with OpenAI, Anthropic, Google. Push for transparency frameworks.
Launch ACAS and other attribution tools. Track real usage data and demonstrate the scale of content extraction.
Create industry standards, influence regulation, and build sustainable compensation models.
Publishers, brands, and technology providers establishing attribution standards and negotiating collectively with AI companies. Early membership shapes the frameworks. Delayed participation means accepting terms established without your input.
Publishers
Media, news, editorial
Brands
Retailers, manufacturers
Data Providers
Content, pricing, catalogs
Your content powers AI systems that capture your traffic. Publishers report 20-40% traffic declines. Without measurement, there is no basis for negotiation. Attribution provides visibility into usage. Visibility enables fair compensation frameworks.
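As a minimal sketch of how that visibility could become measurement, assume a publisher collects ACAS-style records like the example shown earlier; the record feed, the second URL, and the confidence cutoff below are illustrative, not a defined coalition tool.

```python
from collections import Counter

# Hypothetical feed of ACAS-style attribution records gathered from AI responses.
# The data is illustrative; field names mirror the earlier example.
records = [
    {"acas_source": "timeout.com/nyc-restaurants-2024", "acas_usage": "reformulation", "acas_confidence": 0.92},
    {"acas_source": "timeout.com/nyc-restaurants-2024", "acas_usage": "reformulation", "acas_confidence": 0.88},
    {"acas_source": "example-publisher.com/city-guide", "acas_usage": "reformulation", "acas_confidence": 0.64},
]

MIN_CONFIDENCE = 0.8  # illustrative cutoff, not part of any published spec

# Count how often each source informs AI outputs, ignoring low-confidence matches.
usage_by_source = Counter(
    r["acas_source"] for r in records if r["acas_confidence"] >= MIN_CONFIDENCE
)

for source, count in usage_by_source.most_common():
    print(f"{source}: informed {count} AI responses")
```

A report like this is what turns "your content powers AI" from an assertion into a negotiating position.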
Your product information powers AI shopping recommendations. You cannot verify how you're represented. Wrong prices, outdated specifications, associations with fraudulent competitors. Legitimate brands have no visibility into AI representations and no mechanism to correct misrepresentations that damage trust.
Attribution infrastructure is the next frontier. Just as CDNs and analytics became essential web infrastructure, ACAS will become essential AI infrastructure. Early movers shape the standards.
AI systems require chips, power, and content. Content is currently undervalued. As reasoning capabilities commoditize, content quality and uniqueness become competitive differentiators. Without transparency frameworks, AI providers face fragmented negotiations with thousands of publishers, litigation risk, and potential content withdrawal. Attribution establishes sustainable content access at scale: negotiate once with coalitions, not individually. Transparency mechanisms solve the content access problem so AI companies can focus on what matters—building better models.
Technical specifications, implementation guides, and frequently asked questions
OpenAttribution serves anyone whose proprietary information powers AI responses. This includes publishers losing traffic, brands unable to verify their representation, data providers whose curated information is scraped, and technology companies building attribution infrastructure. If AI systems use your information without transparency, this coalition addresses your concerns.
Members gain collective bargaining strength instead of negotiating individually. The coalition develops technical tools like ACAS, shapes policy, and engages AI providers directly. Greater membership increases leverage for establishing transparency standards and fair compensation frameworks.
Technical expertise is not required. While we develop technical standards, most members are business leaders, content owners, and brand managers. Participation strengthens collective bargaining power and helps shape policies that serve your business interests.
AI business models are being established now. Early members shape the standards and gain first access to attribution data and compensation frameworks. Delaying membership means accepting terms established without your input. The current window for influence will close as these practices solidify.
OpenAttribution establishes sustainable frameworks for AI systems to access quality content while providing accountability mechanisms. AI systems require quality data. Content creators require measurement and attribution. Users need transparency to verify information. Source attribution serves all parties by clarifying legal standing and enabling fair market mechanisms.
Courts are saying no. OpenAI markets ChatGPT as a "trusted thinking partner" while its terms disclaim liability and require users to verify outputs. Air Canada marketed its chatbot as authoritative but was held liable when it hallucinated a refund policy. The tribunal ruled that companies are responsible for what their systems output, regardless of disclaimers. Without source attribution, users cannot perform the verification that disclaimers require, yet AI companies market their outputs as trustworthy. Attribution resolves this: it enables the verification that marketing claims of authority demand.
AI companies face incompatible legal positions. They defend training as "transformative use" in copyright cases, yet some attempt liability protections designed for platforms distributing user content. Courts are skeptical—a Florida judge questioned whether AI outputs qualify as protected speech, and OpenAI's CEO told Congress Section 230 isn't the right framework for AI. Source attribution creates verifiable accountability, enabling users to verify claims, content creators to measure usage, and courts to assess legal responsibilities.
Complete ACAS v1.0 technical specification and protocol structure
Request Access →

AI companies market systems as authoritative while disclaiming liability through terms of service. OpenAI's terms declare services "as is," place verification responsibility on users, and warn not to rely on outputs as truth. Yet OpenAI markets ChatGPT as a "trusted thinking partner." Courts are rejecting this contradiction. Air Canada was held liable when its chatbot hallucinated a refund policy—the tribunal ruled companies are responsible for outputs they deploy in customer-facing contexts, regardless of disclaimers.
Additional legal tensions compound this. In copyright litigation, AI companies defend training as "transformative use." In liability cases, some attempt Section 230 immunity designed for platforms distributing user content, not companies generating content. A Florida judge questioned whether AI outputs even qualify as protected speech. OpenAI CEO Sam Altman testified: "I don't think Section 230 is even the right framework." The EU AI Act moves toward product liability, where developers of high-risk AI systems face stricter accountability standards regardless of disclaimers.
Source attribution resolves these contradictions. Attribution enables the verification that marketing claims of trustworthiness require. Users can assess reliability. Content creators can measure usage, establishing a basis for fair negotiation. Courts can assess accountability claims. This framework clarifies legal standing and enables sustainable market mechanisms for all parties.
Interested in joining the coalition? Contact us to discuss membership and next steps.
Why This Matters
AI systems consume content at scale without attribution. Content owners cannot measure usage or establish negotiating positions. Users cannot verify sources or assess reliability. Without quality content, AI systems degrade. Transparency mechanisms serve all parties by establishing measurement, accountability, and sustainable access to information.
AI business models are being established now. Early members shape the standards and establish negotiating positions. Delayed participation means accepting frameworks established without your input.
20-40%
Publisher traffic lost to AI
56-69%
Zero-click searches
???
Unmeasured value
OpenAttribution establishes transparency standards and negotiates collectively with AI providers.