This is Ryan Barnett's Typepad Profile.
Ryan Barnett
Recent Activity
We are looking into options for adding HMAC protection to Cookie data. The trick here is that the data leaving the web app in the Set-Cookie response header is not exactly the same as the data returned in request Cookie headers. Look at this example Set-Cookie from Google:
Set-Cookie: PREF=ID=45f40e8097a0ef03:FF=0:TM=1391003789:LM=1391003789:S=dHIbLYQBaCTU01tL; expires=Fri, 29-Jan-2016 13:56:29 GMT; path=/;
Out of this data, only the "PREF" Cookie data would be sent back in subsequent requests. The expires, path and domain Set-Cookie elements instruct the browser what to do with the Cookie data, but they are not echoed back in subsequent requests. The end result is that we cannot simply hash the entire Set-Cookie header like we can do with HTML elements. We would need to hash only the first Cookie data section. This is where we are researching.
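A minimal sketch of the idea in Python (the key, cookie names, and `_mac` suffix are all illustrative, not the actual implementation): sign only the name=value pair, since that is the only portion the browser echoes back.

```python
import hashlib
import hmac

SECRET = b"change-me"  # hypothetical server-side signing key

def sign_cookie(name, value):
    # HMAC only the name=value pair -- the expires/path/domain
    # attributes never come back in request Cookie headers.
    mac = hmac.new(SECRET, f"{name}={value}".encode(), hashlib.sha256).hexdigest()
    return f"{name}={value}; {name}_mac={mac}"

def verify_cookie(name, value, mac):
    expected = hmac.new(SECRET, f"{name}={value}".encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, mac)
```

The companion "_mac" cookie travels alongside the original, so on each request the inspection layer can recompute the HMAC over the returned pair and reject tampered values.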
Rishi - I would suggest sending an email to the ModSecurity community mailing list -
@KEvin Climx - Agreed. See this blog post -
If you are using the OWASP ModSecurity CRS, the setup config file checks for those proxy headers - You could update the rules to check TX:REAL_IP instead of REMOTE_ADDR.
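A rule of the sort described might look like this (the rule id and address range are illustrative):

```apache
# Match against the proxy-derived client IP captured by the CRS
# setup file, rather than REMOTE_ADDR (the proxy's own address).
SecRule TX:REAL_IP "@ipMatch 203.0.113.0/24" \
    "id:1000200,phase:1,t:none,block,log,msg:'Denied client behind proxy'"
```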
I agree that injecting XSS code into request headers such as User-Agent is not new. The purpose of this post is to highlight that attackers are still using it and that security vendors must take care to prevent this type of attack when viewing attack data in a web-based console. Preventing this type of attack is one reason why Trustwave's WebDefend WAF does not use a web-based admin console.
Ryan Barnett is now following ModSecurity Admin
Nov 19, 2012
AJ - please use the ModSecurity users mail-list to discuss compiling issues:
@lotek - No single detection technique is adequate to combat today's web attacks. The value of this concept lies in the collaboration between blacklist filtering and Bayesian analysis. As the attackers fine-tune their attack payloads to bypass the blacklist filters, they are also training the SPAM classifiers. This makes the Bayes classifier better able to identify the attack payload that evades the RegEx. Keep in mind that the real goal of all of this defensive stuff is two-fold: 1) to raise the bar of compromise - meaning that, when you consider Time-Based Security metrics, we want to make a successful evasion take significantly longer than with RegEx alone, and 2) this increased amount of time allows Defenders and Incident Response personnel time to react. This may be to virtually patch a previously unknown vulnerability or to take other action against the attacker. The goal of this concept is not to be 100% evasion proof.
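The feedback loop can be sketched in a few lines of Python (the blacklist regex, tokenizer, and threshold are all toy stand-ins, not the CRS rules): every payload caught by the blacklist trains a small Bayes-style scorer, which can then flag a variant that slips past the regex.

```python
import math
import re
from collections import Counter

# Toy blacklist -- stands in for a WAF's regex signatures.
BLACKLIST = re.compile(r"(?i)<script|union\s+select|alert\(")

class TinyBayes:
    """Naive-Bayes-style log-likelihood scorer over payload tokens."""
    def __init__(self):
        self.attack = Counter()
        self.benign = Counter()

    @staticmethod
    def tokens(payload):
        return re.findall(r"[a-z0-9_]+|[^\sa-z0-9_]", payload.lower())

    def train(self, payload, is_attack):
        bucket = self.attack if is_attack else self.benign
        bucket.update(self.tokens(payload))

    def score(self, payload):
        # Positive => looks more like attack traffic (Laplace-smoothed).
        ta = sum(self.attack.values()) + 1
        tb = sum(self.benign.values()) + 1
        return sum(math.log(((self.attack[t] + 1) / ta) /
                            ((self.benign[t] + 1) / tb))
                   for t in self.tokens(payload))

def inspect(payload, bayes):
    # Every regex hit trains the classifier, so evasions that dodge
    # the regex still resemble what the classifier has already seen.
    if BLACKLIST.search(payload):
        bayes.train(payload, True)
        return "blocked-by-regex"
    return "flagged-by-bayes" if bayes.score(payload) > 0 else "allowed"
```

For example, once "1 union select ..." has been caught by the regex and used for training, a comment-obfuscated "union/**/select" that evades the regex still scores as attack-like.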
You are correct in that there was the "+" potential bypass, however ModSecurity handles that with the urlDecodeUni and removeWhitespace transformation functions. Here is how the debug log looks when I send this attack with the "+" sign in it:

GET /?+-d+auto_prepend_file= HTTP/1.0

Debug log:
Recipe: Invoking rule b7fddfc8; [file "/etc/apache2/modsecurity-crs/base_rules/modsecurity_crs_15_custom.conf"] [line "1"].
Rule b7fddfc8: SecRule "QUERY_STRING" "@rx ^-[sdcr]" "phase:1,t:none,t:urlDecodeUni,t:removeWhitespace,block,log,msg:'Potential PHP-CGI Exploit Attempt'"
T (0) urlDecodeUni: " -d auto_prepend_file="
T (0) removeWhitespace: "-dauto_prepend_file="
Transformation completed in 13 usec.
Executing operator "rx" with param "^-[sdcr]" against QUERY_STRING.
Target value: "-dauto_prepend_file="
Operator completed in 7 usec.
Warning. Pattern match "^-[sdcr]" at QUERY_STRING. [file "/etc/apache2/modsecurity-crs/base_rules/modsecurity_crs_15_custom.conf"] [line "1"] [msg "Potential PHP-CGI Exploit Attempt"]
@Lamar - ha, I am working on a blog post right now on the same type of traffic. We are seeing it too.
@Lamar - I do agree with your general point about needing to upgrade and patch Apache, however I believe that these attackers are probably breaking in through well-known application flaws rather than Apache itself. For example, one of the IP addresses you listed in your blog post is running WordPress - hxxp:// If you read some of my recent honeypot blog posts, there are a number of WordPress vulns that are being targeted - and
@Michael - Good catch. Fixed, thanks.
We had our final Level I Winner for the HP Free Bank site - Alexander Zaitsev! We have had over 650 participants in this challenge so far. Can anyone achieve Level II status and find an SQL Injection Evasion???
Status Update - We have had 3 Level I Winners so far:
- IBM Testfire: Yuriy Goltsev
- Cenzic CrackMe Bank: Ahmad Maulana
- Acunetix Acuart: Travis Lee
The HP Free Bank demo is still open! Good luck :)
@Mustafa - proper removal of malicious code is highly dependent upon how the code is being included within your site. For instance, if the attack vector is Malvertising, then the malicious code isn't even on your site but on an affiliate's site. There are also reports of compromised PHP config files and of compromised plugin software for apps like WordPress. So, there is no easy remediation response. There is another new capability within ModSecurity v2.6 - content manipulation, where you can actually alter inbound/outbound body content. The new operator is called @rsub - It allows you to edit live HTTP streams, so it would be possible to create a new rule that would strip out this malicious code in the interim while you track down the infection vector.

@Timothy - fixed typo, should be "validation-based system"
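A sketch of the interim stripping idea (the script URL and rule id are placeholders, and the substitution regex would need to match the real injected code):

```apache
# Stream editing requires these engines (ModSecurity v2.6+):
SecContentInjection On
SecStreamOutBodyInspection On

# Remove the injected script tag from outbound pages while the
# actual infection vector is tracked down.
SecRule STREAM_OUTPUT_BODY "@rsub s/<script src=\"http:\/\/malicious.example\/x.js\"><\/script>//" \
    "id:1000100,phase:4,t:none,nolog,pass"
```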
@Sebastiaan Jansen - the "body=30" directive was just an example. Each site would need to adjust properly to allow for various uploads. Ideally what should be done is for the Apache Software Foundation (ASF) to update the mod_reqtimeout code and expand its coverage so that it could be defined within different Apache scope locations (such as Directory, etc...), which would allow for specifying different thresholds per resource.

@Eugene Nelen - This ruleset assumes that you are using the OWASP ModSecurity Core Rule Set - In the modsecurity_crs_10_config.conf file, it properly initiates the IP collection, which then allows other rules to add/increment variables.
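For reference, an example of the mod_reqtimeout directive in question (the numbers are illustrative and would need tuning per site):

```apache
<IfModule reqtimeout_module>
    # Allow 20-40 seconds to receive the request headers and at least
    # 30 seconds for the body, extending the timeout by 1 second for
    # every additional 500 bytes received.
    RequestReadTimeout header=20-40,MinRate=500 body=30,MinRate=500
</IfModule>
```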
Fixed the typo, thanks - the correct Apache directive name is RequestReadTimeout
@sasha - as Stefano pointed out, the concept here is a distributed architecture where ModSecurity is running on a front-end, reverse proxy server. In this setup, we can export our anomaly score data and add it to the request headers on the back-end of the connection when Apache proxies the request to the destination web server. Also, as Stefano mentioned, if we see any existing WAF request header data on the front-end of the request, then we can block it as it is obviously spoofed.

@Stefano - I agree that this concept is a bit ahead of its time. Still, web applications already often inspect other request header data to make security decisions (Cookies, SessionIDs, etc...). This concept fits in perfectly with OWASP AppSensor and ESAPI.

@Marc - I understand your viewpoint. WAFs are of greatest value when organizations either don't have access to the source code (e.g., 3rd party apps/plugins) and/or when the cost to fix the issue in the code is deemed too high from a business perspective. From a complexity standpoint, yes, this does add a layer; however, there is a flip-side as well, and that is scaled security protections. Most organizations don't have a homogeneous web architecture and thus can't leverage secure code reuse. So, by implementing some of these security protections externally, you can do this regardless of web language/platform.
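One possible shape for the front-end proxy configuration (the header name and rule id are assumptions, and the export rule must run after the CRS has finished computing the anomaly score):

```apache
# Drop any score header arriving from the client side -- it is spoofed.
RequestHeader unset X-WAF-Anomaly-Score early

# Export the computed anomaly score into an environment variable...
SecRule TX:ANOMALY_SCORE "@gt 0" \
    "id:1000300,phase:2,t:none,pass,nolog,setenv:anomaly_score=%{tx.anomaly_score}"

# ...and attach it to the request that mod_proxy forwards downstream,
# where the back-end app (AppSensor/ESAPI style) can act on it.
RequestHeader set X-WAF-Anomaly-Score "%{anomaly_score}e"
```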
@Jinxy - The @validateByteRange operator tells ModSecurity *how* to process the payload data. In this case, it is restricting which byte characters are allowed within your site. As you have stated, however, if your site requires support for other languages (using UTF8 encoding) then you will need to keep this setting pretty wide.

@Thomas - In general, checking the Referer request header is prone to false positives, as you really have no control over what other people are doing on their sites, and this ends up being a "guilt by association" problem when you click on links to a site. What I would recommend is that you update the CRS with a new rule in the modsecurity_crs_48_local_exceptions.conf file where you can evaluate the matched rule and, if it was found in the Referer variable, just cancel the TX match. Take a look at the 48 file for some examples.
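As a simpler alternative to the runtime exception approach described above, ModSecurity v2.6+ also offers a directive that removes one variable from an existing rule's target list (the rule id below is purely illustrative):

```apache
# Placed after the CRS include, e.g. in modsecurity_crs_48_local_exceptions.conf.
# Stop rule 950001 (illustrative id) from ever inspecting the Referer header:
SecRuleUpdateTargetById 950001 "!REQUEST_HEADERS:Referer"
```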