Dentons Data

The Canadian Privacy Commissioner’s new age assurance guidance: impacts on business

By Danielle Dudelzak
May 11, 2026
  • Children's Privacy
  • Guidance

The Office of the Privacy Commissioner of Canada (“OPC”) has released two complementary sets of guidance – one for websites and online services (“Online Guidance”) and one for developers of age assurance systems (“Developer Guidance”) – and is accepting comments on both documents until August 4, 2026. Together, they represent the most detailed articulation of the OPC’s expectations in this space to date.

The term ‘age assurance’ encompasses age verification (calculating the difference between a verified date of birth and a subsequent date), age estimation (analysis of biological or behavioural features that vary with age), and age inference (using verified information that indirectly implies an individual is over or under a certain age). The privacy impact of any mechanism depends on its particular design and implementation.

The core tension

There is broad consensus that more must be done to reduce the potential harms children face online. Yet many of the more robust methods of achieving this goal — such as biometric age estimation — require the collection of even more sensitive personal information. The OPC acknowledges these tensions in the guidance, positioning age assurance as just one tool among many rather than a default condition for accessing the Internet. A proportionate approach is required to ensure that the privacy intrusion and friction introduced by age assurance are justified by the severity of the harm being addressed.

Websites and online services

The Online Guidance makes three main points:

Blocking all children not an option

According to the Online Guidance, businesses will not be able to simply block children from accessing their websites in order to avoid the issue altogether. Where a business intends to prevent access to content or a service, “it should be able to demonstrate that there is a legal requirement to do so or that there is a potential harm specific to children that warrants differentiating users by age”.

Where demonstrably necessary, age assurance should be appropriate and proportional

Where a business does intend to accommodate users who are children by adjusting its personal information practices, it should not immediately jump to implementing age assurance. Instead, businesses should first determine whether age assurance is actually necessary: do their personal information practices pose a potential harm to children that warrants differentiating users by age, is a non-trivial number of children likely to access the service, and could less privacy-invasive alternatives – such as off-by-default settings or advertising restrictions – adequately address those potential harms?

Where age assurance is warranted, businesses should then consider the nature of the method used. The extent and sensitivity of personal information collected by the age assurance method chosen must be proportionate to the severity of the risk being addressed, and its application can be limited to the specific point where age differentiation is needed – for instance, by segregating age-restricted content – rather than imposed as a blanket condition for access.

Implementation must be privacy-protective

Having decided to use an age assurance method, businesses must then ensure that it is used in a privacy-protective manner by restricting the result to its intended purpose, not attempting to correlate visits by the same individual, providing users with options and appeal mechanisms, and limiting the number of times an individual must undergo the age assurance process.

Examining the Online Guidance in more detail

Determining whether age assurance is required

Businesses often dismiss the idea that they collect children’s personal information unless they actively target this demographic. Targeting, however, is not what triggers obligations under the Online Guidance – it appears to apply whether a website or online service actively invites children or children simply access it on their own. As a result, all operators of such websites or services should be able to establish that at least one of the following conditions is met to ensure the website or service is safe for children:

  1. The website or online service is likely to be accessed only by a trivial number of children;
  2. Access to the website or online service is not likely to pose a potential harm to children; or
  3. Appropriate measures are taken to mitigate the potential harms to children.

What constitutes a “non-trivial number”?

The OPC clarifies that this evaluation requires an organization to understand high-level audience metrics based on the design, nature, and content of the site or service. It does not require collecting age-related information from all users to determine precise demographics. However, as long suspected, an “unenforced statement in a privacy policy (or similar document) that individuals under a given age are not intended or permitted to use the site or service would not be a meaningful factor in establishing the actual age demographics of users.”

If only a trivial number of users are likely to be children, age assurance should not be deployed, in the interests of limiting collection of personal information.

Does access pose a potential harm?

Not all collections and uses of personal information related to children pose potential harm. The OPC identifies certain standard practices unlikely to meet this threshold:

  • Using first-party cookies or other methods to collect analytics data about site use;
  • Allowing a user to submit an email address to receive a newsletter;
  • Allowing a user to save preferences by creating an account on a service.

By contrast, certain practices are more likely to meet the harm threshold:

  • Services that allow unrestricted private contacts between users, creating the potential for a child to be exploited by an adult user;
  • Services that create detailed profiles of users, which are sold or used to create exploitative or age-inappropriate advertising;
  • Services that encourage users to divulge sensitive personal information about themselves.

The more extensive the collection – or the more sensitive the information collected – the more thorough the examination of potential harms should be. Of course, the nuances matter – many businesses will find themselves in a grey zone, such as a social media service with personalization features that are neither clearly exploitative nor entirely benign.

What are appropriate measures that would mitigate the potential harms?

Age assurance can be a legitimate approach to harm mitigation, but will not always be the recommended approach. Design choices that minimize the risk of negative impacts from the collection and use of personal information, as well as secondary impacts (such as reducing individuals’ willingness to use a service or access content even if they would not be subject to an age-based restriction), need to be considered here.

Consider alternatives before deploying age assurance

As mentioned, even where a website or service is likely to be accessed by children and poses potential harm to them, businesses should not immediately jump to developing an age assurance strategy. The OPC expects organizations to first consider whether reasonable, less privacy-invasive measures could address the potential harm. In the context of behavioural advertising, for example, this could include:

  • Prohibiting (or only using ad services that prohibit) the use of any inference that a user is, or may be, a child for the purpose of behavioural advertising;
  • Discontinuing behavioural advertising upon becoming aware that the user is a child (for instance, automatically opting out any user who indicates they are a child during account creation or whose device sends a signal indicating that the user is a child); and
  • Making appropriate opt-out controls readily available.

Only where such alternatives are insufficient should the organization proceed to consider what form of age assurance is appropriate and when it should be applied.

Select a proportionate method and apply it narrowly

The age assurance method should match the level of risk — stronger verification for serious risks to children, lighter-touch methods where the risk is lower. Age checks should happen only when needed to prevent a specific harm, like accessing age-restricted content, not as a blanket requirement for the whole site. One way to do this is to separate out age-restricted content so users only go through age verification when they try to access it, or to keep potentially harmful features turned off by default until a user confirms they’re old enough.
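The point-of-need approach described above can be sketched in a few lines. This is a minimal illustration under stated assumptions – the route names, session structure, and helper names are all hypothetical, not drawn from the guidance: the age check fires only when a user requests age-restricted content, never as a site-wide gate.

```python
# Hypothetical set of routes that serve age-restricted content.
RESTRICTED_PATHS = {"/mature-content"}

def handle_request(path: str, session: dict) -> str:
    """Route a request, applying age assurance only where differentiation
    by age is actually needed to prevent a specific harm."""
    if path in RESTRICTED_PATHS and not session.get("age_check_passed"):
        # Only here is the user routed through the age assurance process.
        return "redirect: /age-check"
    # All other content is served without any age-based friction.
    return f"serve: {path}"
```

A user browsing general content never encounters the check; only an attempt to open a restricted path triggers it, which keeps the privacy intrusion proportionate to the harm being addressed.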

The OPC also keeps the door open for the EU-style digital wallet model, noting that it would be preferable for organizations to accept a credential from a digital wallet or a browser-based signal should those become trusted and commonplace for Canadian Internet users. This underscores a broader expectation: businesses must regularly revisit their choice of age assurance method to determine whether developments in the field would allow the necessary effectiveness to be achieved in a more privacy-protective manner.

Developers and Age Assurance Providers

The companion guidance sets out detailed design expectations for the entities responsible for building and operating age assurance systems. Once the decision to use age assurance has been made, providers must comply with requirements around six core design considerations:

  1. Minimize collection and avoid retention. Collect only the minimum personal information necessary and delete it once an age signal is generated. Consider on-device processing to limit server-side exposure.
  2. Limit the information in the age assurance result. The result sent to the relying party should contain no more than a yes/no signal or age range — no metadata, unique identifiers, or information about the individual. The guidance suggests that if a relying party requires more granular information about an individual, such as their exact age, the developer/provider will be obliged to request a justification for the inclusion of this granular information.
  3. Avoid secondary use or disclosure. Personal information collected for age assurance must not be used for any other purpose or disclosed. Providers should demonstrate compliance through independent audits or conformity assessments.
  4. Minimize information generated during the process. Any mid-stage data created by the system (such as biometric representations) must be immediately deleted after processing.
  5. Do not retain information about online activities. Providers must not profile individuals based on where they are being age-assured. The OPC strongly encourages “double anonymity” — where the provider does not know what content is being accessed and the relying party does not know the individual’s identity.
  6. Do not disadvantage any group. Providers must proactively assess whether accuracy is reduced for any population group and ensure equally privacy-protective alternatives are available where limitations are identified.
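The “limit the information in the result” and “double anonymity” ideas above can be illustrated with a toy token exchange. This is a sketch only, with illustrative names throughout – a real deployment would use asymmetric signatures, digital wallet credentials, or zero-knowledge proofs rather than a shared HMAC key (which would let a relying party forge tokens). The point it demonstrates: the provider signs nothing but a yes/no claim, so the relying party learns no identity or metadata, and the provider never learns where the token is presented.

```python
import base64
import hashlib
import hmac
import json

# Illustrative shared key; real systems would use asymmetric cryptography.
KEY = b"shared-verification-key"

def issue_token(over_18: bool) -> str:
    """Provider side: sign a claim containing only the yes/no result –
    no identifier, timestamp, or other metadata about the individual."""
    claim = base64.urlsafe_b64encode(json.dumps({"over_18": over_18}).encode()).decode()
    sig = hmac.new(KEY, claim.encode(), hashlib.sha256).hexdigest()
    return f"{claim}.{sig}"

def verify_token(token: str):
    """Relying party side: check the signature and read only the yes/no
    claim; returns None if the token has been tampered with."""
    claim, sig = token.rsplit(".", 1)
    expected = hmac.new(KEY, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(claim))["over_18"]
```

Because the token carries only the boolean claim, the relying party cannot recover the individual’s identity from it, and because the provider hands the token to the individual rather than to the site, it never sees which content is being accessed.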

Practical considerations with regard to compliance

As with any guidance, the principles are clear but putting them into practice will require working through the operational detail.

Considerations for websites and online service providers:

  • Assessing “potential harm” is subjective. The guidance provides illustrative examples at either end of the spectrum but offers no clear methodology for the many businesses operating in the grey zone between obviously harmful and obviously benign practices.
  • Determining whether a “non-trivial number” of children access a service is imprecise. Organizations must rely on indirect signals – design features, third-party ad placements, research on similar services – without collecting age data. The standard for what constitutes sufficient evidence remains to be seen.
  • Segregating age-restricted content is technically difficult for dynamic platforms. For platforms with algorithmically generated feeds, user-generated content, or dynamically served material, identifying and isolating content that meets the threshold for “age-restricted” in real time is a significant engineering and moderation challenge.
  • Appeals without data retention create difficulties. Organizations would have to offer an appeal process for individuals wrongly denied access, but the companion developer guidance prohibits retaining the personal information used in the original age assurance check, making it difficult to understand why a result was incorrect or to offer a meaningful review.
  • The correlation prohibition conflicts with account-based age signals. The guidance prohibits correlating visits using age assurance results, yet suggests linking age signals to user accounts to avoid repeat verifications. These requirements serve different purposes: the prohibition targets using age check characteristics (timing, method, token) for tracking, while account association reduces repeated sensitive data collection. They can coexist — for example, by storing only a generic boolean flag (e.g., “over 18: true”) without metadata about when or how verification occurred, and keeping this flag separate from analytics systems. The key is ensuring age assurance does not add a new correlation vector, even if other mechanisms (login, cookies) enable session linking.

Considerations for developers and age assurance providers:

  • Achieving “double anonymity” is architecturally complex. Building systems where the provider does not know what content the individual is accessing and the relying party does not know the individual’s identity requires intermediary layers, digital wallets, or zero-knowledge proof infrastructure that may be difficult to deploy at scale.
  • Immediate deletion undermines accuracy improvement. The requirement to delete personal information once an age signal is generated makes it difficult for providers to retrain models, handle appeals, or investigate complaints.
  • Equity testing requires data the guidance restricts collecting. Providers must proactively assess whether their systems are less accurate for certain groups, yet data minimisation requirements limit what demographic information they can collect. Meaningful bias testing typically requires labelled datasets with sensitive demographic attributes — creating a direct tension between the equity and minimisation obligations. As per the guidance, should it not be possible to reasonably address biases within an age assurance system, developers must disclose this to any potential relying party, which would make it difficult for a relying party to continue with its use of the system.  

Getting Ready: Steps for Businesses


Conduct a risk assessment. Understand high-level audience metrics based on the design, nature, and content of your site or service. Document whether a non-trivial number of children are likely to access your service, and whether your personal information practices pose potential harm to them. Evaluate the nature and severity of potential harms on your platform or service offering before deciding whether age assurance is warranted.

Exhaust less-invasive alternatives first. Before deploying age assurance, consider whether measures such as content moderation (human review or AI-based tools), off-by-default settings for potentially harmful features, restricted recommendation algorithms, or opt-out controls could adequately mitigate identified risks. If user-generated content appears on your website, establish clear community guidelines and enforce them consistently. Consider defaulting new accounts to the most restrictive privacy and safety settings, and offering a “safe mode” feed that excludes potentially problematic recommendations.

Apply proportionality in method selection. Where age assurance is appropriate, choose the least intrusive method that effectively addresses the identified risk. Reserve more robust verification (biometrics, government ID) for situations involving significant potential harm to children.

Avoid blanket age-gating. Apply age assurance only at the specific point where users must be differentiated by age to prevent harm — not as a universal access condition. Segregate age-restricted content or adopt off-by-default approaches wherever feasible.

Document your rationale and revisit regularly. Maintain clear records of why you chose (or chose not) to implement age assurance, the risks identified, and how your approach is proportionate. Regularly reassess whether newer, more privacy-protective technologies could achieve the same result. These records will be essential for demonstrating accountability as enforcement expectations evolve.

The OPC’s new age assurance guidance marks an important step toward balancing children’s online safety with broader privacy principles. While the guidance provides welcome clarity on expectations, businesses face genuine implementation challenges — from subjective harm assessments to the technical complexities of double anonymity and content segregation. What is clear is that age assurance is not a default requirement; it is a proportionate response to demonstrated risks.

Organizations should approach this guidance as an opportunity to reassess their data practices holistically, prioritize less invasive alternatives where possible, and build documentation that demonstrates accountability. With the comment period open until August 4, 2026, now is also the time to engage with the OPC on the practical realities of compliance.


For further guidance on implementing the OPC’s age assurance framework or preparing a submission, please reach out to Danielle Dudelzak or other members of the Dentons Privacy and Cybersecurity group.

About Danielle Dudelzak

Danielle Dudelzak (She/Her/Hers) is an associate in the Corporate group at Dentons.

