The latest telecom security developments point to a problem that is bigger than any single operator: the signaling layer of mobile networks still carries too much implicit trust.
In the UK, Ofcom has moved to restrict the leasing of Global Titles — signaling identifiers used inside the international mobile network ecosystem. Around the same time, Citizen Lab published research into covert location tracking through global telecom signaling infrastructure.
These cases are not about apps asking for location permission. They are not about GPS. They are not even necessarily about malware on the phone.
They are about the mobile network’s own backend logic: who is trusted, what they can ask for, and whether sensitive location context is stripped before it crosses the wrong boundary.
The Telia Norway case fits into this wider picture as a separate SIP/IMS example. It should not be confused with the Ofcom or Citizen Lab findings. There is no public evidence that Telia was part of the Citizen Lab investigations, and no public evidence that Telia’s issue involved Global Titles, SS7 or Diameter.
The connection is architectural, not evidentiary.
Some of the most serious mobile privacy failures happen below the app layer, inside signaling systems users never see.
The short version
What happened?
Ofcom restricted Global Title leasing, while Citizen Lab documented covert telecom surveillance campaigns using global signaling infrastructure.
Why it matters
Telecom signaling can expose or obtain location-sensitive information without GPS access, malware or any visible action on the phone.
Where Telia fits
Telia Norway is a separate SIP/IMS case showing how ordinary call setup can expose cell-level access-network context.
Different protocols. Same trust problem.
The global signaling problem
Mobile networks depend on signaling. Before a call connects, before a text is delivered, before roaming works, network systems exchange technical messages about subscribers, routing, authentication, access networks and service state.
That system was built around trust between operators and network partners. In a smaller telecom world, that made sense. Today, the ecosystem includes roaming hubs, MVNOs, interconnect providers, SMS platforms, private operators, vendors and commercial surveillance actors.
That makes the trust surface much larger.
The core question is no longer only whether a protocol works as designed. It is whether the right parties are allowed to send the right signaling messages, and whether sensitive network context stops at the correct boundary.
What Ofcom found
Ofcom’s action focuses on Global Titles.
Global Titles are signaling identifiers used by mobile operators and related network actors. They help route signaling messages across mobile networks, including for roaming, messaging and call routing.
In the wrong hands, they can become an entry point into capabilities that should only be available to trusted telecom actors.
Ofcom warned that leased Global Titles can be misused to intercept or divert calls and messages, obtain information held by mobile networks and, in some cases, track people’s physical location globally.
The regulatory action began in 2025. New Global Title leasing arrangements were banned from 22 April 2025. Existing arrangements were given a transition period until 22 April 2026, with two narrow migration exceptions allowed until 22 October 2026.
This is the important part: Ofcom did not just say “monitor this better”. It moved to restrict the leasing model itself.
That suggests the problem was not only technical. It was structural.
The technical failure in the Ofcom case
The failure is not that Global Titles exist. They are part of how telecom signaling works.
The failure is that they could be leased or made available in ways that allowed non-operators to appear inside the signaling ecosystem with operator-like trust.
Once a party can send signaling messages through a trusted route, the receiving network may treat those messages as legitimate network traffic rather than as an external privacy risk.
That turns a business arrangement into a security boundary problem.
In practice, the risk comes from a combination of:
- signaling identities being used outside their original operator context
- weak governance over who can use them
- insufficient enforcement of signaling trust boundaries
- access to messages that may obtain routing, subscriber or location-related information
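The governance gap above can be made concrete with a small sketch. This is a hypothetical screening check, not a real operator configuration: the Global Title prefixes, operator names and function shape are all illustrative assumptions.

```python
# Hypothetical sketch: screening incoming signaling by calling Global Title.
# The GT prefixes and operator labels below are invented for illustration.

AUTHORIZED_GT_PREFIXES = {
    "44778": "home-operator",       # assumed home-operator GT range
    "44779": "roaming-partner-b",   # assumed roaming partner GT range
}

def screen_calling_gt(calling_gt: str) -> str:
    """Return a trust label for a calling Global Title."""
    for prefix, owner in AUTHORIZED_GT_PREFIXES.items():
        if calling_gt.startswith(prefix):
            return owner
    # A GT outside known ranges may be leased, sub-allocated or spoofed:
    # exactly the scenario Ofcom's leasing restrictions are aimed at.
    return "untrusted"
```

The point of the sketch is the default: a Global Title that cannot be tied to a known operator relationship should be treated as untrusted, not as ordinary network traffic.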
What Citizen Lab found
Citizen Lab adds the operational evidence.
Its research describes two sophisticated telecom surveillance campaigns and links real-world attack traffic to mobile operator signaling infrastructure. The report says suspected commercial surveillance vendors exploited the global telecom interconnect ecosystem, leveraged private operator networks and conducted covert location tracking operations that could persist undetected for years.
This matters because it moves the discussion beyond theory.
SS7 and Diameter abuse has been discussed for a long time. Citizen Lab’s findings connect that risk to actual surveillance activity and commercial actors operating through telecom infrastructure.
The important point is not that every telecom incident is part of the same campaign. It is that the mobile signaling layer remains a viable path for location tracking when access controls, filtering and trust boundaries fail.
Who was behind it?
Citizen Lab does not publicly identify the commercial surveillance vendors behind the campaigns with certainty. The report describes them as suspected commercial surveillance vendors.
That distinction matters.
Telecom attribution is hard. Malicious or abusive signaling can move through legitimate-looking operator paths and blend into large volumes of ordinary roaming and interconnect traffic.
That is part of the severity. This does not necessarily look like a normal cyberattack.
It can look like telecom signaling.
The technical failure in the Citizen Lab case
The Citizen Lab case is about abuse of the global telecom interconnect ecosystem.
Mobile operators around the world are connected through signaling protocols and roaming agreements. Those systems allow networks to locate subscribers, route messages, authenticate roaming users and provide service across borders.
But the same capabilities can become surveillance tools if the actor asking the question should not be trusted.
In simplified terms:
A party that appears trusted inside the telecom signaling ecosystem may be able to ask location-sensitive questions about a mobile subscriber.
The mechanisms discussed by Citizen Lab include SS7, Diameter, private operator networks and interconnect pathways. These are not app-layer attacks. They operate below the user-visible layer of the phone.
That is why the user may see nothing. No permission prompt. No suspicious app. No login alert. No obvious device behavior.
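One practical consequence is that signaling firewalls need to classify operations by privacy sensitivity, not only by fraud risk. The operation names below (MAP AnyTimeInterrogation, ProvideSubscriberInfo, SendRoutingInfo, Diameter Insert-Subscriber-Data) are commonly discussed in signaling-security work, but the policy table itself is an illustrative assumption, not a standard.

```python
# Illustrative classification of signaling operations by location sensitivity.
# The set below is an assumption for the sketch, not a normative list.

LOCATION_SENSITIVE_OPS = {
    "MAP_AnyTimeInterrogation",    # can return cell-level subscriber location
    "MAP_ProvideSubscriberInfo",   # can return location and state information
    "MAP_SendRoutingInfo",         # reveals serving network elements
    "Diameter_IDR",                # Insert-Subscriber-Data request in 4G cores
}

def requires_extra_scrutiny(operation: str, sender_is_home_network: bool) -> bool:
    """Flag location-sensitive operations arriving from outside the home network."""
    return operation in LOCATION_SENSITIVE_OPS and not sender_is_home_network
```

The asymmetry is the point: the same operation can be legitimate from the home network and a tracking primitive from an external party.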
Scope and severity
The scale is large because the issue sits in the global mobile ecosystem, not inside one app or one handset model.
Citizen Lab’s FAQ describes a mobile ecosystem with more than a thousand operators connected through signaling protocols and roaming agreements. That does not mean all operators are malicious or all users are being tracked. It means the trust surface is global.
The severity is high because the affected layer is mostly invisible to users and difficult to audit from the outside.
At the same time, the public evidence does not support claiming mass tracking of ordinary users in the Citizen Lab case. Citizen Lab says these campaigns typically target high-profile individuals and that it has not identified instances of regular users being tracked.
So the careful summary is this:
This is probably not a mass-consumer exploit. It is a serious infrastructure trust problem.
Where the Telia case fits
The Telia Norway case is a useful local example because it shows a more ordinary version of the same trust-boundary problem.
Unlike the Ofcom and Citizen Lab cases, Telia appears to involve SIP inside IMS/VoLTE, not Global Titles, SS7 or Diameter. During ordinary call setup, SIP/IMS signaling exposed access-network information that could identify the serving mobile cell.
In the earlier technical write-up, the relevant SIP header looked like this:
P-Access-Network-Info: 3GPP-E-UTRAN-FDD; utran-cell-id-3gpp=242020af11e8b115
P-Access-Network-Info is useful inside trusted IMS/VoLTE domains because it describes the access network used by the device. But if that information is exposed beyond the correct trust boundary, it becomes location-sensitive metadata.
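A border element is expected to strip this kind of header before signaling leaves the trusted domain. A minimal sketch of that policy, assuming a simplified text representation of a SIP message (real IMS border functions apply this per 3GPP rules, with far more machinery), might look like:

```python
# Minimal sketch of stripping access-network context at a trust boundary.
# The message format is simplified; header names other than
# P-Access-Network-Info are assumptions for the example.

SENSITIVE_HEADERS = {"p-access-network-info", "p-visited-network-id"}

def strip_at_boundary(sip_message: str) -> str:
    """Remove location-sensitive headers before the message leaves the trusted domain."""
    kept = []
    for line in sip_message.splitlines():
        name = line.split(":", 1)[0].strip().lower()
        if name not in SENSITIVE_HEADERS:
            kept.append(line)
    return "\n".join(kept)

msg = (
    "INVITE sip:bob@example.com SIP/2.0\n"
    "From: <sip:alice@example.com>\n"
    "P-Access-Network-Info: 3GPP-E-UTRAN-FDD; utran-cell-id-3gpp=242020af11e8b115\n"
)
```

The policy is trivial; the hard part in practice is knowing exactly where the trust boundary sits and making sure every egress path enforces it.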
That is why Telia belongs in this wider story. It shows that location leakage is not only a legacy SS7 problem or a covert surveillance vendor problem. Similar failures can appear higher up in modern voice infrastructure, through ordinary SIP/IMS signaling.
For the detailed Telia analysis, see the earlier write-up: Telia location data leaked through telecom signaling.
The three cases, side by side
Ofcom / Global Titles
Main technology: Global Titles / operator signaling identities
What happened: The UK moved to restrict the leasing of Global Titles because they can give third parties access to sensitive signaling capabilities.
When: New leasing was banned from 22 April 2025. Existing arrangements were phased out by 22 April 2026, with narrow exceptions until 22 October 2026.
Severity: High. The issue concerns access to trusted telecom signaling paths that can be used to obtain sensitive network information.
Citizen Lab
Main technology: SS7 / Diameter / telecom interconnect
What happened: Citizen Lab documented two sophisticated telecom surveillance campaigns linked to mobile operator signaling infrastructure.
When: Report published 23 April 2026. Citizen Lab says such operations can persist undetected for years.
Severity: High. The activity involves covert location tracking through infrastructure that users cannot inspect or control.
Telia Norway
Main technology: SIP / IMS / VoLTE
What happened: P-Access-Network-Info exposed cell-level access-network information during ordinary call setup.
When: Recently disclosed in Norway and subject to regulatory attention.
Severity: Serious as a privacy and trust-boundary issue, especially because it involved ordinary call setup rather than malware or account compromise.
Ofcom and Citizen Lab are the primary international developments. Telia is included as a separate SIP/IMS case because it shows the same trust-boundary pattern in ordinary call setup. The protocols differ, but the common denominator is telecom signaling trust.
Technical comparison
Global Titles
Normal purpose: Identify signaling endpoints and help route messages across mobile networks.
Failure mode: Leasing or sub-allocation can allow third parties to appear inside trusted signaling paths.
Risk: Interception, diversion, access to network-held information and possible location tracking.
SS7
Normal purpose: Legacy signaling for roaming, subscriber lookup and service routing.
Failure mode: Trust-based signaling can be abused if unauthorized or poorly governed actors gain access.
Risk: Subscriber location queries, message interception and routing manipulation.
Diameter
Normal purpose: Modern signaling for authentication, policy, roaming and subscriber functions in 4G and parts of 5G.
Failure mode: More modern than SS7, but still dependent on inter-operator trust and correct filtering.
Risk: Location tracking or subscriber information exposure if signaling trust assumptions fail.
SIP / IMS / VoLTE
Normal purpose: Set up and manage IP-based voice sessions in modern mobile networks.
Failure mode: SIP headers may carry access-network context beyond the domain where it belongs.
Risk: Cell-level location context may leak during ordinary call setup.
The shared failure pattern
The shared failure is not one protocol or one vendor.
It is the assumption that certain signaling messages are safe because they are exchanged inside a supposedly trusted telecom environment.
That assumption breaks down when:
- signaling identities can be leased or indirectly controlled by third parties
- interconnect paths are not sufficiently governed
- firewalls fail to remove sensitive context
- network-internal metadata crosses into the wrong operational domain
The result is a class of privacy failures that can happen below the phone, below the app layer and below ordinary user awareness.
Why cross-operator boundaries matter
The most important technical question is not only whether location-sensitive metadata exists.
It is how far that metadata can travel.
A mobile core network may need to know which radio cell a phone is attached to. An IMS system may need access-network context to set up a call. A roaming partner may need routing information.
But not every party in the chain should receive everything.
Operator boundary
Does sensitive metadata stay inside the originating operator, or can it cross into another operator’s systems?
MVNO boundary
Can host-network signaling context become visible through MVNO, MVNA or MVNE relationships?
Interconnect boundary
Can signaling data pass through roaming hubs, SMS platforms, transit providers or private operator paths?
Protocol boundary
Can legacy signaling assumptions survive inside modern 4G, 5G, IMS or VoLTE infrastructure?
A header or signaling query visible only inside one tightly controlled network is one class of problem. A signal that can cross into another operator, partner, reseller, interconnect provider or enterprise environment is a much larger issue.
The question is not only whether a field or query is technically valid. The question is whether it crosses a boundary where it no longer belongs.
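The boundary question can be made explicit as a default-deny policy table. The metadata classes and boundary names below mirror the boundaries above but are illustrative assumptions, not an operator's real policy.

```python
# Sketch of an explicit boundary policy: which metadata classes may cross
# which boundary. The table entries are illustrative assumptions.

ALLOWED = {
    ("routing-info",  "interconnect"): True,   # routing has to cross networks
    ("cell-id",       "internal-ims"): True,   # needed for call setup internally
    ("cell-id",       "interconnect"): False,  # cell-level context stops at the edge
}

def may_cross(metadata_class: str, boundary: str) -> bool:
    # Default-deny: any pair not explicitly allowed is blocked.
    return ALLOWED.get((metadata_class, boundary), False)
```

The design choice worth noting is the default: in the failures discussed here, the effective default was permit, because the sender looked like trusted telecom traffic.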
Why this is a governance issue, not just a bug
Telecom security has traditionally focused on availability, fraud, billing abuse, routing integrity and lawful intercept compliance. The cases discussed here sit in a different category: a service can work normally, no account needs to be compromised, no malware needs to exist and no user may notice anything — while sensitive location context still leaks or is obtained through signaling.
That is why this class of issue can persist for years. It lives in the gap between protocol correctness and privacy expectations. From a network perspective, the system may appear to behave as designed; from a user privacy perspective, the result can still be unacceptable.
The real question is not only whether a specific field, identifier or signaling path is technically allowed. The real question is whether network-internal metadata crossed a boundary where it no longer belonged.
Severity assessment
This should be treated as a high-severity telecom security and privacy issue, but with careful wording.
It is not the same as saying every mobile user is actively being tracked. It is also not the same as saying every operator is compromised. The severity comes from the layer involved: telecom signaling is foundational infrastructure, globally interconnected and mostly invisible to users.
User visibility
Low. The user may see no app permission prompt, no malware warning and no obvious device behavior.
Technical detectability
Difficult. Suspicious signaling can blend into large volumes of legitimate roaming and interconnect traffic.
Potential impact
High. The affected data can include location-sensitive network context and subscriber-related information.
Likely targeting
Targeted rather than mass-market, based on the public Citizen Lab description. High-profile individuals are the most likely targets.
The safest summary is this:
This is not a consumer app privacy bug. It is a telecom infrastructure trust problem.
What operators and regulators should ask
The immediate lesson is not simply “patch one system”. Operators and regulators need to ask hard questions about signaling access, filtering and accountability.
- Who can send signaling messages into the network?
- Which third parties have access to signaling identifiers or interconnect paths?
- Are leased or delegated signaling identities still in use?
- Are signaling firewalls checking for privacy-sensitive queries, not only fraud or abuse?
- Are SIP/IMS headers stripped or rewritten at the correct trust boundaries?
- Are SS7 and Diameter controls tested against real-world tracking scenarios?
- Can operators audit whether location-sensitive signaling requests came from legitimate operational need?
- Are MVNO, roaming, SMS and enterprise messaging arrangements included in the threat model?
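Several of these questions reduce to one auditable check: did a location-sensitive request arrive from a party with a legitimate operational relationship? A hypothetical audit pass over signaling records (the record fields, operation names and network labels are all assumptions) might look like:

```python
# Hypothetical audit pass over signaling records. Field names, operation
# names and network labels are illustrative assumptions.

LOCATION_SENSITIVE = {"MAP_AnyTimeInterrogation", "MAP_ProvideSubscriberInfo"}

def audit(records: list[dict], roaming_partners: set[str]) -> list[dict]:
    """Return location-sensitive requests from origins with no known relationship."""
    return [
        r for r in records
        if r["operation"] in LOCATION_SENSITIVE
        and r["origin_network"] not in roaming_partners
    ]

logs = [
    {"operation": "MAP_AnyTimeInterrogation", "origin_network": "partner-x"},
    {"operation": "MAP_AnyTimeInterrogation", "origin_network": "unknown-gt-lease"},
    {"operation": "MAP_UpdateLocation",       "origin_network": "unknown-gt-lease"},
]
suspicious = audit(logs, roaming_partners={"partner-x"})
```

Even this crude filter captures the shift the section argues for: evaluating who asked, not only whether the message was well-formed.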
The most important shift is conceptual: telecom signaling should not be treated as automatically trustworthy just because it comes through a telecom path.
What we should not claim
There are important limits here.
Telia has not been shown to be part of the Citizen Lab campaigns. Telia’s issue has not been shown to involve Global Titles. The Telia exposure has not been publicly documented as working across international networks, roaming partners or foreign operators. Citizen Lab has also not publicly identified the commercial surveillance vendors with certainty.
The strongest claim is also the most accurate:
The connection is architectural, not evidentiary.
Ofcom, Citizen Lab and Telia are connected because they expose weaknesses in telecom signaling trust boundaries. They do not document the same incident.
The core takeaway
The Ofcom and Citizen Lab cases show that mobile-network location privacy remains vulnerable at the signaling layer. The Telia Norway case adds a concrete SIP/IMS example from modern voice infrastructure.
Together, they point to one conclusion:
Mobile-network privacy cannot be assessed only at the app layer. Some of the most important privacy failures happen inside the signaling layer — below the screen, below user consent and often below ordinary monitoring.
The protocols differ. The business models differ. The incidents differ.
But the trust problem is the same.
Mobile networks still exchange too much sensitive context through systems built on assumptions of trust that no longer match the scale, complexity and commercial reality of today’s telecom ecosystem.
Key points
- Ofcom is restricting Global Title leasing because third-party access to telecom signaling identities can be abused.
- Citizen Lab documented covert location tracking through global telecom signaling infrastructure.
- The suspected actors behind the Citizen Lab campaigns are commercial surveillance vendors, but they have not been publicly attributed with certainty.
- The Telia Norway case is separate, but technically relevant: it shows how SIP/IMS call setup can also expose location-sensitive access-network metadata.
- The shared problem is telecom signaling trust: sensitive network context crossing boundaries where it does not belong.
Sources
- Ofcom: action to crack down on exploitation of mobile networks — Ofcom’s announcement on Global Titles and mobile-network security.
- Ofcom: Global Titles and mobile network security — consultation and implementation details for the Global Titles restrictions.
- Ofcom statement PDF: Global Titles and Mobile Network Security — regulatory statement including implementation dates, transition periods and reasoning.
- Citizen Lab: uncovering global telecom exploitation by covert surveillance actors — research report on telecom signaling abuse and covert location tracking.
- Citizen Lab: Bad Connection FAQ — plain-language explanation of the telecom signaling risks discussed in the report.
- Reuters: UK regulator closes loophole that allowed rogue companies to track phone users’ location — news coverage connecting Ofcom’s action with the broader telecom surveillance context.
- Telia location data leaked through telecom signaling — earlier technical write-up of the Telia Norway SIP/IMS case and the P-Access-Network-Info header example.