Parameter Tampering and How to Protect Against It

Published 10th July 2016

Parameter tampering is a method by which malicious hackers attempt to compromise your application by manipulating the parameters sent to it, most visibly those in the URL query string. This can cause an application to behave in ways the programmer did not intend, especially when invalid data is encountered.
Internet Security 101 Series
  1. Introduction to Hacking
  2. History of Cryptography
  3. Online Privacy And Why It Matters
  4. Supercookies: The Web's Latest Tracking Device
  5. Ultimate Guide to SSL for the Newbie
  6. How Internet Security and SSL Works to Secure the Internet
  7. Man in the Middle Hacking and Transport Layer Protection
  8. Social Engineering
  9. Cookie Security and Session Hijacking
  10. What is Cross Site Scripting? (XSS)
  11. What is Internal Implementation Disclosure?
  12. Parameter Tampering and How to Protect Against It
  13. What are SQL Injection Attacks?
  14. Protection Against Cross Site Attacks

Parameter tampering exploits weaknesses in the way an application handles untrusted data. The consequences range from simple failures of data integrity through to SQL injection, cross site scripting, or even the upload of binaries containing malware.

At its simplest, parameter tampering is changing the value of a GET or POST variable by means other than normal application usage, for example by editing the URL in the address bar. The untrusted data can also come from the request headers or from cookies, so there are a number of attack vectors which must be addressed.

Request Headers

We've seen request headers before, so this shouldn't be unfamiliar. There are at least a dozen fields in which data can be manipulated, so let's have a look at a sample header and the areas that may be compromised.

GET https://timtrott.co.uk/ HTTP/1.1 
Host: timtrott.co.uk 
Connection: keep-alive 
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 
Upgrade-Insecure-Requests: 1 
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36 
Accept-Encoding: gzip, deflate, sdch 
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6 

So on the first line, this is a GET request. Typically a GET request is a request for the page given in the URL, and any parameters to be passed to the application are contained in the URL. The fact that this request uses HTTPS does not mean it is trusted, nor does it protect the application from malicious requests; it only means that the communication between client and server is encrypted. We can also see the HTTP protocol version. All three of these things can be changed, and if the server isn't set up to handle invalid values or invalid routes, you run the risk of an internal implementation disclosure or worse.

The User-Agent is another key field which may be compromised. Many applications look at this field to identify the browser being used in order to tailor the site to it, for example serving a mobile version of the site to mobile user agents. Some applications log this value to a database, so care must be taken that it does not contain a SQL injection or cross site scripting attack. The same goes for the Referer field, which applications commonly log to identify where visitors come from. We can also expect to see cookie information in the header, both keys and values, either of which can be compromised.
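
Since fields like User-Agent and Referer are routinely written to logs or databases, they must be treated as untrusted data. Here is a minimal sketch in Python (the table and column names are illustrative, not from any particular application) showing how a parameterised query keeps a hostile header value inert:

```python
import sqlite3

def log_user_agent(conn, user_agent, referrer):
    """Store request headers using a parameterised query so header
    values are treated as data, never as SQL."""
    # Never concatenate header values into the SQL string:
    # they are attacker-controlled, just like URL parameters.
    conn.execute(
        "INSERT INTO visits (user_agent, referrer) VALUES (?, ?)",
        (user_agent[:512], referrer[:512]),  # also cap the length
    )
    conn.commit()

# Demo with an in-memory database and a hostile header value
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (user_agent TEXT, referrer TEXT)")
log_user_agent(conn, "Mozilla/5.0'); DROP TABLE visits;--", "https://example.com")
print(conn.execute("SELECT count(*) FROM visits").fetchone()[0])  # 1
```

The placeholder keeps the injection attempt as inert text in the row rather than executable SQL.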

Capturing and Manipulating Parameters

GET parameters are the bits of the URL which follow a question mark, for example ?name=string&variable2=anotherstring. These can be changed in the address bar by any user. It's quite common for advanced users to do this all the time, so it's not just malicious users we have to watch out for.

A common parameter for database driven applications is the page ID. A website may have a URL similar to http://www.example.com/index.php?page=homepage. Navigating around the site we can see that this changes according to the page being accessed, for example http://www.example.com/index.php?page=about. Now, it doesn't take much of a genius to work out that the page content is dependent on this parameter, and that in a dynamic database driven website it would be passed to the database, possibly in a query like select * from pages where id = 'homepage'. This leads on to a SQL injection attack, which we will cover in a later tutorial. Clearly this is a serious risk.
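
One simple defence for a parameter like this is a whitelist: the application knows exactly which pages exist, so anything else is rejected before it reaches the database. A Python sketch (the page names are illustrative):

```python
# Known pages the application actually serves; anything else is rejected.
ALLOWED_PAGES = {"homepage", "about", "contact"}

def resolve_page(page_param):
    """Validate the ?page= value against a whitelist before it goes
    anywhere near a database query."""
    if page_param not in ALLOWED_PAGES:
        return "404"  # fall back to a not-found page
    return page_param

print(resolve_page("about"))                # about
print(resolve_page("homepage' OR '1'='1"))  # 404
```

A tampered or injected value never becomes part of a query; it simply maps to the not-found page.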

Another type of parameter is the POST parameter. These are similar to the GET parameters we just saw, but are contained in the request body (payload) rather than in the URL. POST parameters are usually the result of a form submission, such as a search box or comment form.

POST parameters are primarily altered using a tool such as Fiddler or Postman. The actual steps are the same as for GET parameters: the value of a variable is examined for a common pattern, then an attack is devised.

For example, let's have a look at a rating system. Consider a page with a voting control on it. When a star is clicked, the value is passed back to the server to register the vote. This is commonly done via AJAX these days, so we may see a request similar to this: index.php?action=vote&user=1001&page=about&rating=4. There are four parameters here which may be open to attack, either by passing in invalid data (a rating of 9999 or abcd) or by changing the user id so that a vote is cast as another user. Clearly a risk as well.
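
The defence here is twofold: validate the rating range on the server, and take the user id from the authenticated session rather than from the request. A Python sketch (function and parameter names are illustrative):

```python
def register_vote(session_user_id, params):
    """Validate an AJAX vote request server-side. The user id comes
    from the authenticated session, never from the request itself."""
    try:
        rating = int(params.get("rating", ""))
    except ValueError:
        raise ValueError("rating must be an integer")
    if not 1 <= rating <= 5:
        raise ValueError("rating out of range")
    # Ignore any user= parameter in the payload entirely.
    return {"user": session_user_id, "rating": rating}

print(register_vote(1001, {"rating": "4", "user": "9999"}))
# {'user': 1001, 'rating': 4}
```

Note that the tampered user=9999 parameter is simply ignored; a rating of 9999 or abcd raises an error instead of reaching the database.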

Bypassing Validation

A very common pattern is to perform validation on the client side before sending data to the server. For example, a registration form requires a name and an email address: the name is required and the email address must be in a valid format. Now, you could do this entirely on the server, but this causes delays for the user as the page is posted to the server, the server processes the data and the results are sent back. In reality, this type of validation is commonly done on the client side using JavaScript, giving instant feedback without the form data ever touching the server. This is all good for user experience, and there is nothing wrong with it. The problems start when developers rely on client side validation and neglect server side validation altogether.

Through parameter tampering of the POST variables, or simply disabling JavaScript, invalid data can be sent to the server which is now accepted and processed as if it were valid. At best it may cause a type conversion error (string cannot be converted to integer) or at worst it can contain a SQL injection or cross site scripting attack.
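
Repeating the checks on the server is straightforward. A hedged Python sketch (the form field names and the deliberately simple email pattern are assumptions for illustration):

```python
import re

# A deliberately simple pattern for illustration; real email
# validation is considerably more involved.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_registration(form):
    """Repeat the client-side checks on the server; JavaScript
    validation can be bypassed by tampering with the POST body."""
    errors = []
    if not form.get("name", "").strip():
        errors.append("name is required")
    if not EMAIL_RE.match(form.get("email", "")):
        errors.append("email is invalid")
    return errors

print(validate_registration({"name": "Alice", "email": "alice@example.com"}))  # []
print(validate_registration({"name": "", "email": "not-an-email"}))
```

Even with client-side JavaScript disabled or the POST body tampered with, the server rejects the bad data.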

Model Binding and Mass Assignment

Model binding is a fairly new and common construct in web programming frameworks. It allows developers to develop faster and easier. It revolves around the concept of a model, basically a representation of an entity containing properties which are used in the logic and displayed to the user.

Here is an example C# class model.

public class UserProfile
{
  public string Email { get; set; }
  public string FirstName { get; set; }
  public bool IsAdmin { get; set; }
  public string LastName { get; set; }
  public string Password { get; set; }
  public int UserId { get; set; }
}

A model will commonly map to a table in a database, often in a 1:1 relation between the model's properties and the table's columns.

Model binding is the process by which the application will automatically try and populate a model with data from the payload.
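
Frameworks implement this differently, but the mechanics can be sketched in a few lines of Python (the class and field names mirror the C# example above; the binding function is illustrative, not any particular framework's):

```python
class UserProfile:
    def __init__(self):
        self.email = ""
        self.first_name = ""
        self.last_name = ""
        self.is_admin = False
        self.user_id = 0

def bind_model(model, post_params):
    """Naive automatic model binding: copy every POST parameter whose
    name matches a model attribute. This is the convenience, and the
    risk, that the text describes."""
    for key, value in post_params.items():
        if hasattr(model, key):
            setattr(model, key, value)
    return model

profile = bind_model(UserProfile(), {"email": "bob@example.com", "first_name": "Bob"})
print(profile.email, profile.first_name)  # bob@example.com Bob
```

Notice that nothing in this loop distinguishes a field the form legitimately exposes from one it doesn't; any matching parameter is copied across.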

Consider an account settings page on which a user can edit their information. Models are generally linked to a form, with each field in the form bound to a property on the class. In this example the user can edit their email address, first name and last name. A common construct is to have hidden fields for entity IDs, in this case the UserId. This is a hidden field in the form which is not shown to the user, but is automatically populated with the current user's ID and sent back to the server when the form is submitted.

Hopefully you can start to see some problems with this. Firstly, what happens if the UserId is tampered with? Can we change the name and email address of another user? Is the email address validated?

The main risk, however, is that model binding is an automatic process in most frameworks. Any POST parameter matching a model property is automatically mapped to the entity. If this entity is then written back to the database during an update without proper validation, there could be serious consequences. We'll see an example of this in just a second.

Mass Assignment Attacks

A mass assignment attack is a type of parameter tampering in which a whole set of model fields is assigned from the POST parameters. In the example above, a legitimate payload may look like this (with illustrative values):

POST /index.php

Email=bob@example.com&FirstName=Bob&LastName=Smith&UserId=1001

Now this can be manipulated quite easily to form a mass assignment attack, which takes one of two forms. Either the attacker has fingerprinted your server and software through internal implementation disclosure, knows what platform you are running and can tailor the attack; or it will be a brute force style attack.

If you are running on an open source platform, such as WordPress, Joomla, Drupal, Umbraco et al, then all the attacker has to do is look at the source code for the models. A brute force attack literally bombards the server with requests.

A malicious hacker would then send a payload with an extra crafted variable, assigning a value to a model property in order to bypass security. In this example, making the user an admin:

POST /index.php

Email=bob@example.com&FirstName=Bob&LastName=Smith&UserId=1001&IsAdmin=true

Through the wonders of automagic, this value is then bound into the model, which is persisted through to the database and voila, the user is now an admin.

This can be rectified by not binding directly to the database entity. Instead, use a view model which contains only the fields actually in use, and inspect the values prior to any database update.
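
In code, that amounts to an explicit whitelist of bindable fields. A Python sketch under the same illustrative names as before:

```python
class UserProfile:
    def __init__(self):
        self.email = ""
        self.first_name = ""
        self.last_name = ""
        self.is_admin = False

# Only the fields the edit form legitimately exposes may be bound.
EDITABLE_FIELDS = {"email", "first_name", "last_name"}

def bind_safely(model, post_params):
    """Bind only whitelisted fields, so a crafted is_admin parameter
    is silently dropped instead of persisted."""
    for key, value in post_params.items():
        if key in EDITABLE_FIELDS:
            setattr(model, key, value)
    return model

p = bind_safely(UserProfile(), {"email": "bob@example.com", "is_admin": True})
print(p.email, p.is_admin)  # bob@example.com False
```

The crafted IsAdmin-style parameter never reaches the entity, regardless of what the attacker puts in the payload.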

Fuzz Testing

Fuzz testing is an automated technique which bombards specific payloads with a dictionary of patterns to find vulnerabilities. It does exactly the same as manual parameter tampering, but as an automated process it tests multiple combinations of each parameter against multiple patterns and analyses the results. A fuzzing tool can scan for cross site scripting, SQL injection and directory traversal (patterns which can potentially access system files).
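
The core loop of a fuzzer is simple enough to sketch. This toy Python version (the payload list and handler functions are illustrative, not from any real tool) substitutes each pattern into a parameter and flags responses that reflect it back unescaped:

```python
import html

# A tiny dictionary of classic attack patterns, similar in spirit to
# what a fuzzing tool iterates over.
PAYLOADS = ["'", "<script>alert(1)</script>", '"><img src=x>']

def vulnerable_search(term):
    # Echoes the term back unescaped: a reflected XSS risk.
    return "<p>Results for " + term + "</p>"

def safe_search(term):
    # Escapes the term before echoing it back.
    return "<p>Results for " + html.escape(term) + "</p>"

def fuzz(handler):
    """Flag any payload that is reflected back verbatim."""
    return [p for p in PAYLOADS if p in handler(p)]

print(fuzz(vulnerable_search))  # all three payloads reflected
print(fuzz(safe_search))        # []
```

Real tools add far larger dictionaries and smarter response analysis, but the principle is the same: try each pattern, compare the result against expected behaviour.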

There are many tools available for this, including the OWASP Zed Attack Proxy. Here, though, we are going to use a very simple Fiddler add-on called Intruder21.

What we are going to do here is perform a search, capture the request in Fiddler then run Intruder21 over the request to see if it can detect anything. Fire up Fiddler, make sure it is capturing traffic then make a request to a site and perform a search. In Fiddler, locate the request to the search, right click and select Send to Intruder21.

In this window you can see all the parameters in the request. It sometimes highlights properties which it thinks could be manipulated, but it doesn't always capture them all. I'm only interested in the search for now, so clear the results, locate the search term in the URL or request body, highlight it and click Add Tag.

Fuzz Testing with Intruder21

Now, in the payloads tab you can see all the different values the program will substitute in for the search term. Basically it will go through each payload and perform a search for that term, then analyse the results to see if they match the expected behaviour.

Click on the results tab, then start test to begin. After a while it will come back with the results.

Fuzz Testing with Intruder21

It looks like my website was OK, but any orange or red lines would mean that Intruder21 has identified possible attack vectors, which should be addressed immediately.

Key Points to take away

You must assume that all aspects of an HTTP request can and will be manipulated by attackers. The verb, path, protocol, accept headers, user agent string, referrer, accept language, cookies and the request body are all untrusted data.

Don't rely on controls which depend on the browser - don't depend on client side validation.

Be conscious of where risks might be present in automated processes such as model binding and mass assignment attacks.

Consider which verbs should be allowed for a resource and block the others.
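
Blocking unexpected verbs can be a one-line check per resource. A minimal Python sketch (the routes and allow-lists are illustrative):

```python
# Per-resource verb whitelist; everything else gets a 405.
ALLOWED_VERBS = {
    "/search":  {"GET"},
    "/comment": {"POST"},
}

def check_verb(path, verb):
    """Return an HTTP status: 200 if the verb is allowed for the
    resource, 405 (Method Not Allowed) otherwise."""
    if verb.upper() not in ALLOWED_VERBS.get(path, set()):
        return 405
    return 200

print(check_verb("/search", "GET"))     # 200
print(check_verb("/search", "DELETE"))  # 405
```

Unknown paths fall through to an empty set, so they are rejected as well.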

Fuzz test any and all properties where an attacker may attempt to gain access.

Tutorial Series

This post is part of the series Internet Security 101. Use the links below to advance to the next tutorial in the course, or go back to the previous tutorial in the series.
