Iran’s New Internet Censorship and Digital Surveillance Architecture
Three Signals of a New Phase of Censorship in the Islamic Republic of Iran: The Amnafzar Document, the NAT Debate, and the SNI Spoofing Wave


In recent weeks, three seemingly separate debates have gained momentum at the same time among technical users, internet researchers, and censorship monitors focused on the Islamic Republic of Iran: the publication of a document attributed to the research team of Rasoul Jalili and the technical group of Amnafzar Gostar Sharif; the rumor, or hypothesis, that parts of users’ outbound internet connectivity have been moved behind NAT or centralized gateways; and the spread of a tool known in the Persian-language technical community as SNI Spoofing. At first glance, these three issues appear unrelated.
The first is a policy and technical document about the “controlled restoration” of internet access. The second is a debate about address translation, traffic routing, and packet visibility. The third is a method for misleading DPI systems by manipulating the initial TCP and TLS packets. Read together, however, they offer a more precise picture of the current situation: the Islamic Republic is moving toward a model in which the internet is no longer treated as a public service, but as a classified environment in which access can be authenticated, monitored, rolled back, and attributed to individual users.
The common thread across these three cases is that the main battlefield of censorship has shifted from “blocking websites” through blacklists to “controlling routes and sessions” through whitelists. In this model, the question is no longer simply whether a domain or IP address is blocked. The questions are which network the user is connecting from, whether the user’s identity has been verified, whether the destination is on an approved list, whether DNS traffic passes through the national route, whether the TCP and TLS flow can be identified in its first few packets, and whether, if the user takes an unauthorized path, that activity can be traced back to an individual or an organization.
The Document’s Model of Controlled, Phased Restoration
The document attributed to Amnafzar Gostar Sharif, if independently authenticated, is the clearest articulation of this logic. Titled “Technical Report on Critical Keys in the Design and Implementation of Smart and Sustainable Internet Censorship,” the document identifies its author as the “research team of Dr. Rasoul Jalili, technical group of Amnafzar Gostar Sharif,” and lists the Secretariat of the Supreme Council of Cyberspace and the technical deputy of the Telecommunication Infrastructure Company as recipients of copies. In its abstract, the document refers to a “phased and fully controlled restoration” of access, and proposes a four-stage model: easing restrictions on national services, whitelisting critical websites, controlled restoration of access to artificial intelligence platforms, and then limited reopening of network protocols.

The significance of this document does not lie merely in the fact that it lists a set of technical measures. Its importance lies in the language and operational logic it uses. In this text, “restoration” does not mean the return of a free internet. It means the limited, phased, and reversible opening of routes that have already been prepared for DPI, identity verification, traffic signatures, mandatory DNS routing, and user identification. The document refers to the deployment of third-generation DPI, behavioral and payload-based signatures, limits on file sharing in domestic messaging apps, the blocking of DoH and DoT, the blocking of outbound IPv6, UDP, and ICMP, and even the design of a dedicated signature for WARP. Elsewhere, it proposes that critical websites should first be reopened on mobile networks so that, during the first 48 hours, there is “more precise monitoring” and, if needed, the ability to “identify the offending individual.” This is the point at which the document ceases to be merely technical and becomes political.
Moving Traffic Behind Centralized Gateways
Alongside this document, reports also emerged about “Internet Pro,” or restricted access for selected groups. In this context, the NAT debate becomes significant. In recent days, some users in technical social media circles have raised the hypothesis that parts of users’ outbound connectivity have been moved behind NAT or centralized gateways. In the general sense, NAT, or Network Address Translation, is not an unusual technology. Operators have long used it to place multiple users behind a single public address, or a pool of public addresses, in order to manage IPv4 scarcity.
There has been debate over the accuracy and precision of this claim (which we will examine in the next section). RFC 3022, the classic NAT specification, explains that NAT translates internal addresses into external addresses, and that in the case of NAPT, the translation also includes TCP and UDP ports. The same document states that for TCP and UDP packets, NAT changes require checksum updates, because the checksum in these protocols also covers a pseudo-header that includes the source and destination addresses.
The technical part of the NAT claim, therefore, has a valid core: if traffic passes through NAT or an intermediate gateway, some packet fields, including the address, port, and checksum, may be changed or recalculated. This can disrupt some filtering detection or circumvention methods that rely on directly observing packet behavior from inside the network to the outside. But this fact alone does not justify the immediate conclusion that “the entire internet in Iran has been moved behind NAT,” or that a single nationwide structure has been activated for all users. Such a claim would require independent data, network measurements, comparative traceroutes, TTL analysis, observed routing changes across multiple operators, RIPE Atlas data, or verifiable packet captures.
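The checksum point above is easy to demonstrate. The following is a minimal Python sketch of the TCP checksum computed over the IPv4 pseudo-header plus segment; the addresses and port numbers are invented for illustration, not measurements of any real network. It shows why a NAT box that rewrites the source address must also recompute the checksum: the same segment yields a different checksum once the address changes.

```python
import ipaddress
import struct

def ones_complement_sum(data: bytes) -> int:
    """16-bit one's-complement sum with end-around carry."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return total

def tcp_checksum(src_ip: str, dst_ip: str, segment: bytes) -> int:
    """Checksum over the IPv4 pseudo-header plus the TCP segment.
    Because the pseudo-header covers the source and destination
    addresses, any NAT that rewrites an address must recompute it."""
    pseudo = (ipaddress.IPv4Address(src_ip).packed
              + ipaddress.IPv4Address(dst_ip).packed
              + struct.pack("!BBH", 0, 6, len(segment)))  # zero, proto=TCP, length
    return (~ones_complement_sum(pseudo + segment)) & 0xFFFF

# A 20-byte SYN segment with the checksum field zeroed (illustrative values).
syn = struct.pack("!HHIIBBHHH", 40000, 443, 1000, 0, 5 << 4, 0x02, 65535, 0, 0)
before_nat = tcp_checksum("10.0.0.5", "93.184.216.34", syn)   # private source
after_nat = tcp_checksum("198.51.100.1", "93.184.216.34", syn)  # rewritten source
assert before_nat != after_nat
print(f"checksum before NAT: {before_nat:#06x}, after: {after_nat:#06x}")
```

This is also why tools that fingerprint middleboxes often look for exactly these rewritten fields: a changed checksum, port, or TTL is indirect evidence that something on the path touched the packet.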
The more important point is that if NAT is being discussed here as part of censorship, it is no longer merely a tool for conserving IP addresses. NAT can become part of a route control chain: a chain in which user traffic passes through centralized gateways, policy engines, DPI systems, and logging infrastructure. In this context, it is necessary to distinguish between simple NAT, operator-level CGNAT, transparent proxies, TCP proxies, and stateful middleboxes. If the intermediate device only translates addresses and ports, we are dealing with NAT or CGNAT. But if it terminates the TCP session, reconstructs the flow, inspects the payload, or normalizes protocol behavior, then the issue is no longer simple NAT. It is a session control system.
This is where the NAT debate connects directly to the Amnafzar document. The document attributed to Amnafzar refers to the complete blocking of outbound IPv6, UDP, and ICMP, the use of DPI, mandatory routing of DNS to a national resolver, and granular reopening based on FQDN, IP prefix, and port. Implementing such a model requires more than destination-based filtering. It requires route control, state maintenance, behavioral packet analysis, and policy enforcement at network gateways. Put simply, if tiered internet access is to be implemented at national scale, there must be a point in the network that decides who can connect, to where, through which protocol, at what time, and under what level of monitoring.
The SNI Spoofing Technique
The third piece of this puzzle is the SNI Spoofing wave. A GitHub project named patterniha/SNI-Spoofing describes itself with the phrase: “Bypass DPI with IP/TCP Header manipulation.”
To understand this method, it is necessary to look at the role of SNI in TLS. SNI, or Server Name Indication, is a TLS extension that allows the client to declare the intended domain name in the ClientHello, so that the server can select the appropriate certificate or security policy. RFC 6066 explains that a server can use the server_name extension in the ClientHello to choose a certificate or other aspects of the security policy. This same feature is also important for filtering, because in traditional TLS, SNI is usually visible before the contents of the connection are fully encrypted, allowing a DPI system to decide whether to block or allow the connection.
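This visibility is easy to demonstrate with Python’s standard ssl module. The sketch below generates a real ClientHello into a memory BIO, without any network connection, and shows the hostname sitting in cleartext among the very first bytes a client emits; it assumes default TLS settings, without Encrypted Client Hello, which Python’s ssl module does not use.

```python
import ssl

# Build a real TLS ClientHello in memory and show that the SNI hostname
# is visible as plaintext bytes, which is what lets a DPI middlebox
# filter on it before any encryption is in place.
ctx = ssl.create_default_context()
incoming = ssl.MemoryBIO()
outgoing = ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="example.com")

try:
    # The handshake cannot complete (there is no peer); the attempt
    # simply writes the ClientHello into the outgoing BIO.
    tls.do_handshake()
except ssl.SSLWantReadError:
    pass

client_hello = outgoing.read()  # the raw bytes a client would send first
assert b"example.com" in client_hello
print(f"ClientHello is {len(client_hello)} bytes; SNI is in cleartext")
```

The same property that lets a server pick the right certificate is what gives a filtering system its earliest, cheapest decision point.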
The method known as SNI Spoofing attacks precisely this point. The client first establishes a TCP connection with the destination. It then sends a fake TLS ClientHello containing the SNI of an allowed or whitelisted domain, but deliberately sets an invalid TCP sequence number for the packet. If the DPI system sitting on the path does not accurately track TCP state, it may see this packet and mark the flow as an allowed connection. The real server, however, will not accept the packet because its sequence number falls outside the valid TCP receive window. The client then sends the real ClientHello within the same connection, this time with a valid sequence number and the SNI of the actual destination. If the filtering system does not inspect the flow again after seeing the first packet, the real connection is established.
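The sequence of events described above can be modeled as a toy simulation. The hostnames, sequence numbers, and window sizes below are invented for illustration; this models the logic of the trick against a stateless inspector, not a working circumvention tool.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    seq: int    # TCP sequence number
    sni: str    # hostname carried in this ClientHello

class NaiveDPI:
    """A stateless inspector that decides on the first ClientHello it
    sees and never re-inspects the flow: the weakness being exploited."""
    def __init__(self, whitelist):
        self.whitelist = whitelist
        self.verdict = None
    def inspect(self, seg: Segment) -> bool:
        if self.verdict is None:
            self.verdict = seg.sni in self.whitelist
        return self.verdict

class Server:
    """Accepts only segments whose seq lies inside its receive window."""
    def __init__(self, rcv_nxt, window):
        self.rcv_nxt, self.window = rcv_nxt, window
    def accepts(self, seg: Segment) -> bool:
        return self.rcv_nxt <= seg.seq < self.rcv_nxt + self.window

dpi = NaiveDPI(whitelist={"allowed.example"})
server = Server(rcv_nxt=1000, window=65535)

# 1. Decoy: whitelisted SNI, deliberately invalid sequence number.
fake = Segment(seq=999_999_999, sni="allowed.example")
# 2. Real ClientHello: valid sequence number, actual destination.
real = Segment(seq=1000, sni="blocked.example")

assert dpi.inspect(fake) is True and not server.accepts(fake)
assert dpi.inspect(real) is True and server.accepts(real)
```

The decoy poisons the inspector’s per-flow verdict while the real server silently discards it; the real ClientHello then rides through on the already-approved flow.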
The role of sequence numbers in TCP is fundamental. RFC 9293 explains that a TCP implementation, when processing incoming segments, must check whether each one falls within the valid sequence space and receive window. Segments whose sequence numbers fall entirely outside the receive window are rejected as unacceptable, whether they are old duplicates or injected packets, and are discarded. This is why, in this method, the fake packet is visible to DPI but invalid for the real server. The deception is not aimed at the server. It is aimed at the filtering system.
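The acceptability test can be sketched in a few lines of Python, using modular arithmetic to handle 32-bit sequence-number wraparound. This is a simplified model of the four-case check RFC 9293 describes, not an implementation of any particular TCP stack.

```python
MOD = 2**32  # TCP sequence numbers are 32-bit and wrap around

def in_window(x: int, lo: int, wnd: int) -> bool:
    """True if x lies in [lo, lo + wnd) under 32-bit wraparound."""
    return ((x - lo) % MOD) < wnd

def segment_acceptable(seg_seq: int, seg_len: int,
                       rcv_nxt: int, rcv_wnd: int) -> bool:
    """Sketch of the RFC 9293 segment acceptability test."""
    if seg_len == 0:
        if rcv_wnd == 0:
            return seg_seq == rcv_nxt           # only an in-place probe fits
        return in_window(seg_seq, rcv_nxt, rcv_wnd)
    if rcv_wnd == 0:
        return False                            # no room for data at all
    # Data segment: either its first or its last byte must fall in window.
    return (in_window(seg_seq, rcv_nxt, rcv_wnd)
            or in_window((seg_seq + seg_len - 1) % MOD, rcv_nxt, rcv_wnd))

# A far-out-of-window fake ClientHello is rejected; a valid one is accepted.
assert not segment_acceptable(999_999_999, 100, rcv_nxt=1000, rcv_wnd=65535)
assert segment_acceptable(1000, 100, rcv_nxt=1000, rcv_wnd=65535)
```

A DPI box that applied this same test to observed flows would never have marked the decoy packet as meaningful, which is exactly the countermeasure discussed next.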
This distinction is important: the term SNI Spoofing can be misleading. In this case, the real SNI is not being spoofed at the server level. The real server does not accept the fake packet. A more accurate name for the method is “fake ClientHello injection with an allowed SNI and an invalid TCP sequence number.” In more media friendly language, it is “deceiving DPI with a fake SNI in a packet that the real server will not accept.”
This method also shows why the censor is pushed toward stateful DPI, more precise state maintenance, and even more complex proxy systems. If DPI only sees the first visible ClientHello and makes its decision based on that packet, such methods can deceive it. But if the filtering system validates sequence numbers, tracks the TCP receive window, observes the ServerHello, and delays its final decision until the real state of the connection is clear, the method becomes harder to use. That said, doing this requires more computation, memory, and complexity, and introduces a greater risk of error. For this reason, countering SNI Spoofing is not just a small technical adjustment. It is a sign of the shift from lighter filtering toward deeper session control.
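The stateful counterpart can be sketched as a toy model as well: an inspector that tracks the receive window it has learned from the handshake and refuses to update its verdict on segments the real server could never accept. All values here are invented for illustration.

```python
from typing import Optional

class StatefulDPI:
    """Toy model of a stateful inspector. It assumes it has learned the
    flow's rcv_nxt and window by observing the TCP handshake, and it
    ignores any ClientHello the real server would have to discard."""
    def __init__(self, whitelist, rcv_nxt: int, window: int):
        self.whitelist = whitelist
        self.rcv_nxt, self.window = rcv_nxt, window
        self.verdict: Optional[bool] = None

    def inspect(self, seq: int, sni: str) -> Optional[bool]:
        in_window = self.rcv_nxt <= seq < self.rcv_nxt + self.window
        if not in_window:
            return self.verdict      # out-of-window decoy: no state change
        if self.verdict is None:
            self.verdict = sni in self.whitelist
        return self.verdict

dpi = StatefulDPI({"allowed.example"}, rcv_nxt=1000, window=65535)

# The decoy with a whitelisted SNI and an invalid sequence number is ignored.
assert dpi.inspect(999_999_999, "allowed.example") is None
# The real ClientHello, with a valid sequence number, is the one that counts.
assert dpi.inspect(1000, "blocked.example") is False
```

The price of this robustness is visible even in the toy: the inspector must now hold per-flow state for every connection on the path, which is precisely the computation and memory cost the paragraph above describes.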
From DPI to Transparent Proxy: A Possible Response to Protocol Abuse
The timeline matters here. During the twelve-day war, internet shutdowns in Iran no longer looked like classic BGP-level blackouts alone. The pattern of disruption already suggested a more engineered and preplanned model, one designed not merely to withdraw routes or sever external connectivity, but to separate access tiers, preserve selected domestic or approved pathways, and control outbound flows with greater precision.
Within this framework, the DPI-based model appeared to work, at least from the state’s point of view, for a period of time. External access was restricted, selected paths remained available, and filtering systems could block or allow flows based on domain names, protocols, traffic signatures, or the early behavior of a connection. The pressure point emerged when circumvention tools began targeting not only destinations, but protocol assumptions themselves. The method known as SNI Spoofing was a clear example: by injecting a fake ClientHello with an invalid TCP sequence number, it tried to exploit the gap between what DPI could observe and what the real server would accept as valid TCP traffic.
If the hypothesis that NAT or centralized gateways were activated after the SNI Spoofing wave is correct, the more precise explanation may be that parts of the traffic were moved not merely behind simple NAT, but behind transparent proxies or stateful middleboxes. The distinction is critical. Ordinary NAT translates addresses and ports. A transparent proxy, by contrast, can terminate the user’s TCP connection and then establish a new connection to the real destination on the user’s behalf. In that situation, the user no longer has a direct end-to-end TCP connection with the destination server. The connection is stopped, normalized, and reconstructed by a middlebox. The practical effect is that techniques relying on violations of TCP behavior, invalid sequence numbers, or packets designed to be visible to DPI but rejected by the real server can be neutralized in the short term, because the middlebox intercepts and rewrites the session before it reaches the destination.
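The difference can be seen in a toy model of a terminating middlebox. Because the proxy reassembles the byte stream itself before forwarding anything, an out-of-window decoy simply never exists from the destination’s point of view; the values below are invented and illustrative only.

```python
class TerminatingProxy:
    """Toy model of a transparent TCP proxy: it maintains its own
    receive window toward the client and forwards only the reassembled
    byte stream, so out-of-window decoy segments vanish on the path."""
    def __init__(self, rcv_nxt: int, window: int):
        self.rcv_nxt, self.window = rcv_nxt, window
        self.forwarded = []   # what the destination (and any DPI behind
                              # the proxy) actually gets to see

    def receive(self, seq: int, payload: bytes) -> None:
        if self.rcv_nxt <= seq < self.rcv_nxt + self.window:
            self.forwarded.append(payload)   # only valid segments survive
            self.rcv_nxt += len(payload)
        # Out-of-window segments are dropped, exactly as a real TCP
        # endpoint would drop them; they never reach the destination.

proxy = TerminatingProxy(rcv_nxt=1000, window=65535)
proxy.receive(999_999_999, b"fake ClientHello, SNI=allowed.example")  # decoy
proxy.receive(1000, b"real ClientHello, SNI=blocked.example")         # real

assert proxy.forwarded == [b"real ClientHello, SNI=blocked.example"]
```

Once the session is terminated and rebuilt like this, any inspection performed on the proxy’s clean side sees only the real SNI, which is why this architecture neutralizes the trick rather than merely detecting it.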
The Mechanism of Internet Censorship and Surveillance in the Islamic Republic of Iran
In the older model of censorship, the central question was simple: is this website blocked or accessible? In the new model, the questions multiply: does this user belong to an approved tier? Is the connection coming from a mobile network or a fixed line? Has DNS passed through a national resolver? Is the destination on the critical services list? Does the SNI match the real destination? Has the checksum been rewritten after NAT? Was the sequence number seen by the DPI system valid for the real server? Was the flow marked as legitimate after the first ClientHello, or was it inspected again?
From a digital rights perspective, this shift is more dangerous than classical filtering. Classical filtering blocked access; the new model makes access conditional. A selected user, company, or institution may be allowed to connect slightly more freely, but that expanded access also places them inside a system of identity verification, logging, and attribution. The general public remains behind broad restrictions, while approved groups use the internet through limited, controllable, and reversible access. This is what policy language may call “controlled restoration,” but in practice it represents a more advanced form of infrastructural discrimination.
In such a model, the internet is not reopened; it is engineered. The gradual return of certain services, the activation of special packages, limited access to app stores or work tools, and the provision of specific IP addresses to selected businesses should not be mistaken for the lifting of filtering. They may instead be components of a new architecture: an internet in which a different route is defined for each user tier, each type of service, and each level of trust.
Taken together, these three cases point to one clear conclusion: the Islamic Republic is moving from destination-based censorship toward session-based control. The document attributed to Amnafzar shows the planning language behind this transition. The NAT and centralized gateway debate highlights the possible infrastructure for route control. SNI Spoofing, meanwhile, shows where users’ technical resistance strikes at this architecture: in the first few packets, precisely where DPI tries to decide whether a connection is legitimate.
For this reason, the news value of these three issues does not lie merely in the exposure of a document, a technical rumor, or a GitHub tool. Their value lies in the way they fit together. These three signals suggest that the internet in the Islamic Republic of Iran has entered a phase in which access, identity, route, protocol, and attribution are increasingly bound together. At this stage, every “restoration” of access should be judged by a sharper set of questions: who is being connected, through which route, under what level of monitoring, and with what capacity for identification?