How to Get SQL Server Security Horribly Wrong

Posted: 31/01/2019 in Uncategorized

It is no good getting some, or even most, aspects of SQL Server security right. You have to get them all right, because any effective penetration of your security is likely to spell disaster. If you fail in any of the ways listed below, you can’t assume that your data is secure, and things are likely to go horribly wrong.

Failure #1: Not securing the physical environment

A database team might go through extraordinary measures to protect its SQL Server instances and the data they contain, but not necessarily extend that diligence to the physical environment, often because the assets themselves are not in the team’s control. Yet physical devices, such as servers, backup drives, or disks in a storage area network (SAN), can be compromised just like any software or network component, so whoever is in charge of those devices had better take heed.

All it takes is one disgruntled employee with access to the physical assets to turn an organization upside down. Before anyone knows what has happened, the individual can walk off with a backup drive full of credit card numbers, health records, company secrets, or other sensitive information, resulting in permanent damage to the organization’s reputation and financial well-being. Even if the data is encrypted, an employee with the necessary privileges, access to the security certificates, and a bit of know-how can make full use of that data on the open market.

Organizations serious about data security must implement defence-in-depth strategies that protect all layers of the data infrastructure, including the physical components, whether from inside threats or those coming from outside the organization. Every server and disk that hosts a SQL Server instance or data file must be physically protected so that only those who absolutely need access to those devices can get at them. That includes devices used for replication, mirrored databases, backups, or log files. The number of workers who have access to the physical devices should be strictly limited, with security measures scrutinized and updated regularly.

Another challenge, perhaps even more difficult to address, is trying to “enlighten” those wayward employees with privileged access who are careless with their desktops and laptops, leaving them unattended and unlocked for varying lengths of time. Anyone with less than honourable intentions and a little insight into the systems can stroll along and change or copy sensitive data, not only making the careless user look bad, but again putting the company’s reputation and financial health on the line. If the imprudent user is a creature of habit, the thief has an even easier time getting at the sensitive data. Given the possibility of such a scenario, the organization’s only hope is employee education and IT policies that mitigate such risks as much as possible.

Failure #2: Not protecting the server environments

SQL Server can spread across environments far and wide, its impact reaching into machines and folders on the other side of the known network. Installation files might be located on a nearby server, log files on another device, data files on the new SAN down the hall, and backups in a remote data centre. Even within the host, SQL Server makes use of operating system (OS) files and the registry, spreading its impact even further.

Anything SQL Server touches should be protected to minimize the potential for a SQL Server instance or its data being compromised. For example, a database might contain sensitive data such as patient health records. Some of that data can end up in the log files at any given time, depending on the current operations. Even if all other files are carefully protected, a less-than-secure log file can have far-reaching implications if the network or attached devices are compromised.

Database and IT teams should take every precaution to protect the files and folders that support SQL Server operations, including OS files, SQL Server installation files, certificate stores, and data and log files: everything. Administrators should not assign elevated privileges to any related folder and should not share any of those folders on the network. Access to the files and folders should be limited to the greatest degree possible, granting access only to those accounts that need it. Teams should always keep in mind the principle of least privilege when it comes to NTFS permissions, that is, granting access only to the degree absolutely necessary and nothing more. The same goes for the machine as a whole: don’t assign administrative access to a server when a user requires only limited privileges.

Failure #3: Implementing inadequate network security

When you consider all the ways a network can be hacked, it’s remarkable we can do anything safely. Weak network security can lead to viruses, network attacks, and compromised data of unimaginable magnitude. Cyber-criminals able to exploit network flaws can not only inflict damage but also download boatloads of data before being detected, whether credit card information, social security numbers, company strategies, or a host of other types of sensitive data.

An organization must do everything possible to protect its network and the way SQL Server connects with that network. One place to start is to ensure that you’re running the necessary antivirus software on all your systems and that the virus definitions are always up to date. Also be sure that firewalls are enabled on the servers and that the software is current. Firewalls provide network protection at the OS level and help to enforce an organization’s security policies.

In addition, configure SQL Server to use TCP/IP ports other than the default ones (TCP 1433 for the database engine and UDP 1434 for the Browser service). The default ports are well known and are common targets for hackers, viruses, and Trojan horses. Also configure each instance to use a static port, rather than allowing the number to be assigned dynamically. You might not be able to ward off all attacks, but this way you’re at least obscuring the pathways.
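If you are unsure which port an instance is currently listening on, you can check from within SQL Server itself. A minimal sketch (the port change itself is made in SQL Server Configuration Manager; the DMV below requires VIEW SERVER STATE permission):

SELECT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;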

To further obscure your SQL Server instance, consider disabling the Browser service to prevent snoopers from being able to search for it on the network. Clients can still access the instance as long as their connections specify the correct port number or named pipe. If disabling the Browser service is not an option, at least hide any instances you don’t want easily discovered, once again forcing any client connections to have to provide the port number or named pipe.

In addition, you should disable any network protocols that are not needed. For example, it’s unlikely you need both the Named Pipes and TCP/IP protocols enabled. Also grant the CONNECT permission on an endpoint only to the logins that require it; for all others, explicitly deny the permission, as sketched below. If possible, avoid exposing the server that’s running SQL Server to the Internet altogether.
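As a sketch of the endpoint side of this advice, you can list an instance’s endpoints and explicitly deny CONNECT on one you don’t use (the endpoint name below is the default system name; verify it against sys.endpoints on your own instance first):

-- Review the endpoints and their protocols
SELECT name, protocol_desc, state_desc
FROM sys.endpoints;

-- Deny the Named Pipes endpoint to everyone if only TCP/IP is needed
DENY CONNECT ON ENDPOINT::[TSQL Named Pipes] TO public;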

Failure #4: Not updating and patching your systems

Some of you might recall what happened in 2003. The SQL Slammer computer worm infected over 75,000 servers running SQL Server. Within 10 minutes of deployment, the worm infected more than 90% of vulnerable computers and took down thousands of databases. Slammer exploited a buffer-overflow vulnerability in SQL Server to carry out an unprecedented denial-of-service (DoS) attack. The rest is cyber history.

The interesting part of this story is that the bug had actually been discovered long before the attack. In fact, Microsoft provided a fix six months in advance of the DoS onslaught. The organizations that got hit by Slammer had failed to patch their SQL Server instances.

Because of the onslaught of threats aimed at SQL Server and other systems, companies like Microsoft regularly release service packs, patches, security fixes, and other updates to address newly found vulnerabilities. What these vendors can’t do, however, is force their customers to apply the updates.

In all fairness, there’s a good reason why many DBAs and other administrators can be slow to get around to patching their systems. Applying these fixes can be a big deal. They take time, testing, and careful consideration. If you apply an update to a production server without first testing it, you risk taking down the entire system. Overworked DBAs must have the time in their schedules and the resources necessary to apply these updates properly.

Yet such incidents as the Slammer debacle demonstrate how critical it is to keep your SQL Server instances up to date. Not only can such attacks lead to excessive resource consumption, server downtime, and corrupt data, but also to even more serious concerns. The information gathered from a successful DoS attack can be used to launch subsequent attacks, some of which might be aimed at getting at the actual data.

Microsoft is constantly releasing critical updates and fixes for both Windows and SQL Server. You can manually download and install them, or set up your systems to update the software automatically if you can afford to take such risks. More often than not, you’ll want to first test the update and schedule a time to implement it into a production environment. It is no small matter, certainly, but keeping Windows and SQL Server current is one of the most effective strategies you can employ for protecting your system’s data.
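A quick way to see how current an instance is, is to query its build and patch level and compare the result against Microsoft’s published update lists; for example:

SELECT SERVERPROPERTY('ProductVersion') AS Build,
       SERVERPROPERTY('ProductLevel')   AS PatchLevel,
       SERVERPROPERTY('Edition')        AS Edition;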

Failure #5: Maintaining a large attack surface

We’re not talking rocket science here, just common sense. The more SQL Server features and services you have enabled, the larger the attack surface and the more vulnerable your system is to potential attacks. When setting up SQL Server, you should install only the services you need and disable the features you don’t plan to use anytime soon. Only the services and features for which there is a credible business need should be running.

Cybercriminals are a diligent lot, and the more you give them to work with, the happier they are. Given the proliferation of zero-day exploits, the less you can leave yourself exposed, the better. Why install Analysis Services or Integration Services on a production server when all you need is the database engine?

Some features in particular can raise the stakes on your system’s vulnerability. For example, xp_cmdshell lets you execute a command against the Windows host. If that feature is enabled on a SQL Server instance and a hacker gains access, xp_cmdshell can open doors to the OS itself. Other features, too, should be enabled only if needed, such as the Active Directory Helper service or the VSS Writer service. You even have to be cautious with OPENROWSET and OPENDATASOURCE because they can facilitate access to external systems. In addition, you should consider disabling mail-related features that are not needed, such as the Database Mail service or the sp_send_dbmail system stored procedure. And don’t install sample databases or code on your production servers.
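Most of the features mentioned above can be switched off directly with sp_configure. A hedged sketch (these are advanced options, so ‘show advanced options’ must be enabled first; only disable what you have confirmed is unused):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'xp_cmdshell', 0;        -- block shell access from T-SQL
EXEC sp_configure 'Database Mail XPs', 0;  -- disable Database Mail if unused
RECONFIGURE;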

If you’re running SQL Server 2005, you can use the Surface Area Configuration (SAC) tool to enable or disable installed components. For whatever components SAC doesn’t address, you can use Configuration Manager, the sp_configure system stored procedure, or a tool such as Windows Management Instrumentation.

Starting with SQL Server 2008, the Policy-Based Management system replaces SAC, providing one location to address all your surface configuration needs, in addition to configuring other components. For example, you might define a policy named Surface area conditions with a Remote queries disabled condition, specifying that the OPENROWSET and OPENDATASOURCE functions should be disabled.
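Outside of Policy-Based Management, you can achieve the same effect directly: OPENROWSET and OPENDATASOURCE are controlled by the ‘Ad Hoc Distributed Queries’ option, as in this sketch:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 0;
RECONFIGURE;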

Whichever tools you use, the important point is to install and enable only those components you need. You can always add or enable other features later. Just don’t give hackers any more ammunition than they already have.

Failure #6: Using improper authentication

SQL Server supports two authentication modes: Windows Authentication and Mixed Mode. Windows Authentication, the default authentication type, leverages Windows local accounts and Active Directory network accounts to facilitate access to the SQL Server instance and its databases. Mixed Mode authentication supports both Windows accounts and SQL Server accounts (SQL logins). You can specify the authentication mode when installing SQL Server or change it afterwards by updating the server properties.
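You can verify which mode an instance is currently using with SERVERPROPERTY; for example:

-- 1 = Windows Authentication only, 0 = Mixed Mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;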

Microsoft (and just about everyone else) recommends that you use Windows Authentication whenever possible, falling back on Mixed Mode only to support backward compatibility, access to SQL Server instances outside the domain, and legacy applications. Windows Authentication is more secure because it can take advantage of the Windows and Active Directory mechanisms in place to protect user credentials, such as the Kerberos protocol or Windows NT LAN Manager (NTLM). Windows Authentication also uses encrypted messages to authorize access to SQL Server, rather than passing passwords across the network. In addition, because Windows accounts are trusted, SQL Server can take advantage of Active Directory’s group and user account administration and password management capabilities.

SQL Server accounts and passwords are stored and managed within SQL Server, with passwords being passed across the network to facilitate authentication. Not only does this make the authentication process less secure, but it also means you can’t take advantage of the password controls available to Active Directory. In addition, you must be diligent about managing the passwords within SQL Server, such as mandating strong passwords and expiration dates as well as requiring new logins to change their passwords. You must also guard against accounts being created with blank passwords, including accounts created by third-party products. Weak passwords leave your database open to brute force attacks and other nefarious activities. Blank passwords open up your databases to everyone and everything.
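When you do have to create SQL logins, you can at least inherit the Windows password policy and hunt for blank passwords. A sketch (the login name and password are placeholders; PWDCOMPARE is the documented way to test a password against a stored hash):

-- Enforce policy and expiration on a new SQL login
CREATE LOGIN ReportingUser
WITH PASSWORD = 'Pl@ceholder!2019' MUST_CHANGE,
     CHECK_POLICY = ON,
     CHECK_EXPIRATION = ON;

-- Find existing logins with a blank password
SELECT name
FROM sys.sql_logins
WHERE PWDCOMPARE('', password_hash) = 1;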

Also, with Mixed Mode authentication, you have to be particularly wary of the powerful built-in SA account. At one time, this account was actually created with a blank password by default. Fortunately, that has not been the case since SQL Server 2005. Many database pros recommend that you rename the SA account. Others suggest you disable it and never use it. Still others would have you do all three, which is not a bad strategy, given how well known the account is and its history of abuse. At the very least, you should assign a complex password to the account and keep it disabled.

It should also be noted that SQL Server creates the SA account even if Windows Authentication is the selected mode, assigning it a randomly generated password and disabling the account. You might consider applying a complex password in this case as well, and certainly don’t enable the account.
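A sketch of the rename-and-disable approach (the new name is arbitrary; pick something nondescript):

ALTER LOGIN sa WITH PASSWORD = 'SomeLong&ComplexPassphrase!1';
ALTER LOGIN sa WITH NAME = [srv_admin_disabled];
ALTER LOGIN [srv_admin_disabled] DISABLE;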

Failure #7: Assigning the wrong service accounts

Each SQL Server service requires a Windows account to run so that the service can access resources such as networks and file folders. Because orchestrating this access can be somewhat cumbersome, services are often assigned high-privileged built-in accounts such as Local System or Network Service, thus preventing those mysterious SQL Server error messages that can pop up at peculiar moments, providing information so obscure that even the Google gods cannot offer an answer.

The problem with using these accounts is that, if SQL Server were to be compromised, the OS too could be compromised, to the degree that the service account has access. These built-in accounts can actually inherit elevated privileges in Active Directory, which are not required in SQL Server. For example, you can easily assign the Local System account to a SQL Server service.

The Local System account is a very high-privileged account with extensive access to the local system. It is like granting a user administrative privileges to the server. If SQL Server were to be compromised, the would-be hacker could potentially have unrestricted access to the machine and its resources.

Despite the convenience that these built-in accounts bring, you should follow the principle of least privilege when assigning accounts to SQL Server services. The accounts should have exactly the permissions necessary for the service to do its job. You should also use a different account for each service, with the permissions set up specifically to meet the needs of that account.

For example, SQL Server Agent will likely require a different set of permissions from Integration Services. Also, avoid reusing accounts that other services on the same server already use. In addition, you must take into consideration whether the service will need access to domain resources, such as a data file on a network share. Keep in mind, too, that service accounts should be governed by good password management policies, such as enforcing complex passwords and expiration dates.
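To review which accounts your services actually run under, newer versions (SQL Server 2008 R2 SP1 and later) expose a DMV; a quick check:

SELECT servicename, service_account, startup_type_desc
FROM sys.dm_server_services;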

Failure #8: Failing to control access to SQL Server resources

Like all aspects of SQL Server access, account access should adhere to the principle of least privilege, assigning only the permissions necessary to perform specific functions. Along with this principle, we can add another: separation of duties. This guards against conflicts of interest and the inadvertent combination of privileges that leads to excessive access.

All too often, database administrators (or someone) will assign one or two sets of generic privileges to all users, without thought to the consequences. For example, an assistant bookkeeper might need access to sales totals and quantities but is granted access to the entire database, which includes sensitive customer information. That employee could abuse that position by modifying records or stealing data. Together, the principle of least privilege and the separation of duties help to ensure that database owners maintain control over who can access SQL Server and its data and what levels of access they have.

This also goes for the applications and services accessing the database. They too should be using low-privileged accounts that grant only the access needed. In addition, be sure to remove unused default accounts created by third-party apps (as well as any other accounts not in use).

When possible, avoid granting individual accounts access to SQL Server; instead, grant access to the security groups that contain these accounts, giving them the specific access they need. This encourages thinking in terms of separation of duties and makes it easier to manage large numbers of accounts. Also, consider disabling the guest account in your databases so that members of the public server role can’t access those databases unless they’ve been specifically granted the permissions.
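A sketch of both recommendations (the domain, group, and database names are hypothetical):

-- Grant access to an Active Directory group rather than to individuals
CREATE LOGIN [CORP\Sales_Readers] FROM WINDOWS;

-- Disable guest access in a user database
-- (guest must stay enabled in master, msdb, and tempdb)
USE SalesDB;
REVOKE CONNECT FROM guest;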

Pay particular attention to how you assign administrative permissions to users. For example, choose wisely which accounts are added to the sysadmin fixed server role. Its members can do just about anything they want in SQL Server. And don’t use the SA account to manage SQL Server, as mentioned earlier.

You should also avoid assigning the CONTROL SERVER permission to individual accounts. It provides full administrative privileges over SQL Server and is already granted to the sysadmin role. Many database pros also recommend that you remove the Windows BUILTIN\Administrators group from the sysadmin role if it has been added. (It was added by default prior to SQL Server 2008.) Although removing the group lets you better control SQL Server access, you have to be careful doing so because it can impact your SQL Server installations.

Make use of SQL Server’s hierarchical permission structure, in which principals (users and roles) can be granted or denied access to securables (the database and its objects). For example, you can grant a principal access at the database, schema, or table level. A database role might be granted SELECT and INSERT privileges on a ZipCodes table but denied UPDATE and DELETE privileges, as sketched below.
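In T-SQL, that example might look like this (role and table names as in the example above):

GRANT SELECT, INSERT ON dbo.ZipCodes TO SalesRole;
DENY UPDATE, DELETE ON dbo.ZipCodes TO SalesRole;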

By taking this approach, you can ensure that users perform only specific tasks and not others. Again, the key is to follow the principle of least privilege and the separation of duties. This applies to administrative accounts as well as other types of accounts.

Failure #9: Failing to encrypt sensitive data

An organization less diligent about security might assume that, because SQL Server is a backend system, the databases are inherently more secure than public-facing components and be satisfied that the data is ensconced in a protective layer. But SQL Server still relies on network access and is consequently exposed enough to warrant protection at all levels. Add to this the possibility that physical components such as backup drives can be lost or stolen (most likely the latter), and you can’t help but realize that no protection should be overlooked.

Encryption is one such protection. Although it cannot prevent malicious attempts to access or intercept data, no more than it can prevent a drive from being stolen, it offers another safeguard for protecting your data, especially that super-sensitive stuff such as credit card information and social security numbers. That way, if the data has been accessed for less than ethical reasons, it is at least protected from prying eyes.

SQL Server supports the ability to encrypt data at rest and in motion. On the at-rest side, you have two options: cell-level encryption and Transparent Data Encryption (TDE). Cell-level encryption has been around for a while and lets you encrypt individual columns. SQL Server encrypts the data before storing it and can retain the encrypted state in memory.

Introduced in SQL Server 2008, TDE encrypts the entire database, including the log files. The data is encrypted when it is written to disk and decrypted when being read from disk. The entire process is transparent to the clients and requires no special coding. TDE is generally recommended for its performance benefits and ease of implementation. If you need a more granular approach or are working with SQL Server 2005 or earlier, then go with cell-level encryption.
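A minimal sketch of enabling TDE (the names and password are placeholders; protect and back up these keys, as discussed below):

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Pl@ceholderMasterKey!1';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

USE SalesDB;
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TdeCert;

ALTER DATABASE SalesDB SET ENCRYPTION ON;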

SQL Server can also use the Secure Sockets Layer (SSL) protocol to encrypt data transmitted over the network, whether between SQL Server instances or between SQL Server and a client application. In this way, data can be protected throughout a session, making it possible to pass sensitive information over a network. Of course, SSL doesn’t protect data at rest, but when combined with TDE or cell-level encryption, data can be protected at every stage.

An important component of any encryption strategy is key management. Without going into all the gritty details of SQL Server key hierarchies, master keys, and symmetric and asymmetric keys, let’s just say you need to ensure these keys (or certificates) are fully protected. One strategy is to use symmetric keys to encrypt data and asymmetric keys to protect the symmetric keys. You should also password-protect keys, and always back up the master keys and certificates. Also, back up your database to maintain copies of your symmetric and asymmetric keys, and be sure those backups are secure.
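A sketch of those backups (paths and passwords are placeholders; store the backup files somewhere secure, away from the server itself):

BACKUP SERVICE MASTER KEY
    TO FILE = 'D:\Keys\service_master_key.bak'
    ENCRYPTION BY PASSWORD = 'Pl@ceholderBackup!1';

BACKUP CERTIFICATE TdeCert
    TO FILE = 'D:\Keys\TdeCert.cer'
    WITH PRIVATE KEY (
        FILE = 'D:\Keys\TdeCert.pvk',
        ENCRYPTION BY PASSWORD = 'Pl@ceholderBackup!2');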

Failure #10: Following careless coding practices

After all these years, after all the press, after all the countless articles, recommendations, and best practices, SQL injection remains a critical issue for SQL Server installations. Whether because of sloppy coding within SQL Server or within the web application passing in SQL, the problem persists. Hackers are able to insert malicious code into a string value passed to the database and, in the process, do all sorts of damage: deleting rows, corrupting data, retrieving sensitive information. Any code that accesses SQL Server data should be vetted for potential SQL injection and fixed before going into production. Fortunately, those fixes are often quite straightforward, such as validating user input, parameterizing queries, or delimiting identifiers.
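On the T-SQL side, the classic fix is to parameterize dynamic SQL with sp_executesql rather than concatenating user input into the statement. A sketch (the table and column are hypothetical):

DECLARE @city nvarchar(50) = N'Springfield';  -- imagine this value came from user input

-- The value is passed as a typed parameter, never spliced into the SQL string
EXEC sp_executesql
    N'SELECT CustomerID, Name FROM dbo.Customers WHERE City = @city',
    N'@city nvarchar(50)',
    @city = @city;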

But SQL injection is not the only risk. Issues can arise if the execution context within procedures, functions, or triggers is not explicitly called out. By default, the code executes as the caller, which can lead to problems if an account with elevated privileges has been compromised. As of SQL Server 2005, however, you have more control over the execution context (using EXECUTE AS) and can specify the account to use as the execution context, thus ensuring full control over a procedure’s capabilities.
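A sketch of pinning a procedure’s execution context (the user name is hypothetical and would be a dedicated low-privileged database user):

CREATE PROCEDURE dbo.GetMonthlyTotals
WITH EXECUTE AS 'report_reader'  -- runs as this user, not as the caller
AS
BEGIN
    SELECT SUM(Amount) AS Total
    FROM dbo.Sales
    WHERE SaleDate >= DATEADD(MONTH, -1, GETDATE());
END;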

Another way coders can protect their databases is to create procedures, functions, and views to present the data without providing access to the base tables. This helps to abstract the actual schema and allows access to be controlled at the abstraction layer, restricting access altogether to the tables themselves. This approach also has the benefit of more easily accommodating changes to the application as well as to the underlying data structure. In some cases, the database team won’t have this option because of application requirements or the technologies being used, such as a data abstraction layer, but when it is possible, providing this extra layer of protection can be well worth the effort.
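A sketch of that abstraction (names are hypothetical): expose a view, grant access to the view, and deny access to the base table:

CREATE VIEW dbo.CustomerDirectory
AS
SELECT CustomerID, Name, City   -- deliberately excludes sensitive columns
FROM dbo.Customers;
GO

GRANT SELECT ON dbo.CustomerDirectory TO SalesRole;
DENY SELECT ON dbo.Customers TO SalesRole;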

There are, of course, plenty of other coding practices that can result in compromised data. The key is to make certain that all code is reviewed and tested for vulnerabilities before it is implemented in production.

Failure #11: Not verifying SQL Server implementations

Regardless of the safeguards you’ve implemented to protect your SQL Server databases and their data, security can be breached without you being aware that something is wrong. The only way you can fully protect your data is to monitor and verify your SQL Server installations. Monitoring in this sense does not refer to auditing (which we’ll discuss shortly), but to the process of ensuring that everything is running as it should, that the data is intact, and that nothing too strange is going on.

For example, you should be monitoring CPU, memory, and disk usage, not only for performance considerations but also to ensure you’re not seeing any anomalies that might point to such issues as illegal data downloads or malware accessing or modifying the databases and their data. Activity monitoring can also be a useful stopgap measure until you get a chance to apply the latest security patches. You should take whatever steps might help you expose abuses that would otherwise go unnoticed until it’s too late.

You can also make use of such tools as DBCC CHECKDB, which can help uncover data corruption as a result of a cyber-attack or illegal access. Spot-checking security-related configuration settings can also be useful in discovering if a rogue user has gained access and is wreaking havoc on the permissions or other settings. In general, you want to check your databases to make sure they’re doing only what they’re supposed to be doing and that data is in the state you expect it to be.
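For example, a basic integrity check against a user database:

DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;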

Another useful tool is the SQL Server Best Practices Analyzer, which is free and can help you gather information about security settings and identify vulnerabilities. The tool uses SQL Server recommendations and best practices to uncover potential security risks. And don’t forget the server hosting SQL Server: you (or an IT administrator) can use the Microsoft Security Compliance Manager to enhance the server’s security via Group Policy.

Failure #12: Failing to audit your SQL Server instances

Auditing is a big topic, too big to cover in just a few paragraphs, but it is a topic too important not to mention. One of the most beneficial steps you can take as part of a complete security strategy is to implement auditing. If you’re not monitoring user activity, you might be opening yourself up to severe consequences in the form of corrupt or compromised data.

You have several options for auditing your systems. You can use SQL Server’s built-in features (which have greatly improved since SQL Server 2008), or you can implement one of many available third-party solutions, which usually come in the form of host-based agents or network-based monitors. There are pros and cons to any solution, with performance and cost being two of the main considerations. The important point is to maintain an audit trail of all access to sensitive data. Without such an audit trail, you risk being out of compliance as well as having your data compromised.

You will, of course, have to balance the overhead that comes with auditing against the need to protect sensitive information. You’ll have to take this on a case-by-case basis. Determine the level of granularity needed to ensure a credible audit as well as which operations and data should be audited. At the very least, you should audit both successful and failed login attempts. Also, keep in mind any compliance-related requirements that might govern your auditing strategy.
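As a starting point, SQL Server’s built-in audit objects (SQL Server 2008 and later, edition permitting) can capture successful and failed logins to a file. A sketch, with a placeholder path:

USE master;

CREATE SERVER AUDIT LoginAudit
    TO FILE (FILEPATH = 'D:\Audit\');

CREATE SERVER AUDIT SPECIFICATION LoginAuditSpec
    FOR SERVER AUDIT LoginAudit
    ADD (FAILED_LOGIN_GROUP),
    ADD (SUCCESSFUL_LOGIN_GROUP)
    WITH (STATE = ON);

ALTER SERVER AUDIT LoginAudit WITH (STATE = ON);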



Metasploit is one of the most popular penetration testing frameworks. It is an open-source framework that also has a paid, supported enterprise version.
Metasploit is a preferred choice of penetration testers, ethical hackers, and security engineers worldwide.
Penetration testing is the first step in information security and ethical hacking: identifying the vulnerabilities in a piece of software.

Want to learn ethical hacking? This tool is a good starting point for getting into the field of hacking and information security.

What Is Penetration Testing?

Penetration testing is a simple way to identify vulnerabilities in any software. It is commonly referred to as pen testing.
Penetration testing is performed by simulating attacks that exploit vulnerabilities in software. Usually, many different attacks are performed in a pen test, and the results are recorded in the form of a report.
Typically, a pen test report contains a list of all the vulnerabilities that were exploited and recommended actions to rectify them.
A pen tester’s job is to run a variety of pen tests on different kinds of software to identify and report vulnerabilities before anyone exploits them.

Why Learn Metasploit?

Metasploit is a tool that is commonly used by ethical hackers for penetration testing. Below are some key reasons why Metasploit is a tool worth learning.
  • The framework has strong community support, which makes it effective at detecting vulnerabilities.
  • Its open-source licence makes it easy to use and to contribute to.
  • It is well documented, and help is easily available.

Summary

Metasploit is a framework that you can learn quickly and start using for penetration testing. It is very popular, and tutorials are easily available. I hope this overview helps you get started with penetration testing.


When I see the words “free trial,” I know I’m probably going to have to whip out my credit card and enter the number to “not get charged.” Then I end up forgetting about the trial and want to kick myself in the ass when I see my statement at the end of the month.

To avoid that rigmarole, you can actually use fake credit card numbers instead of your own, using the site getcreditcardnumbers.com, which can generate up to 9,999 credit card numbers at a time, or just one.

Now, to be completely clear, these numbers cannot be used to purchase any item. For that to work, you would need a valid expiration date and CVV or CVC number. This site merely provides the standard 16-digit credit card number that can be used to bypass certain online forms that only ask for the number.

How Does It Work?

The credit card number generator uses a system based on the Luhn algorithm, which has been used to validate numbers for decades. You can learn more about the algorithm on their webpage. A fake number will work for sites that store credit card information to either charge you later or ask you to upgrade.
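To make the algorithm concrete, here is a minimal Luhn check sketched in T-SQL, in keeping with the rest of this blog (the number used is the well-known 4111… Visa test number, not a real card):

DECLARE @card varchar(19) = '4111111111111111';
DECLARE @i int = LEN(@card), @pos int = 0, @sum int = 0, @digit int;

WHILE @i >= 1
BEGIN
    SET @digit = CAST(SUBSTRING(@card, @i, 1) AS int);
    IF @pos % 2 = 1 SET @digit = @digit * 2;  -- double every second digit from the right
    IF @digit > 9 SET @digit = @digit - 9;    -- same as adding the two digits together
    SET @sum = @sum + @digit;
    SET @pos = @pos + 1;
    SET @i = @i - 1;
END;

-- A number passes the Luhn check when the sum is divisible by 10
SELECT CASE WHEN @sum % 10 = 0 THEN 'valid' ELSE 'invalid' END AS LuhnCheck;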

For sites that ask for an upfront fee or have an automatic charge sometime down the line (Hulu Plus, Netflix, Spotify), this won’t work, since they ask for more than just a credit card number for validation. You can, however, get unlimited free trials on those sites using a simple trick with your email address if you have a valid card number with an expiration date and CVC.

Getting a Card Number on Android

There’s also an Android application for getting fake card numbers called CardGen, available for free in the Play Store. You can generate and validate credit card numbers directly from the app, making it easy to use on the go as well. Validation, in particular, would be useful if you were accepting credit card payments on your own site and wanted to make sure the cards were legit.

The app is ad-supported, but since it’s free, I can live with that. In the generate field you can select from most of the major credit card providers, including American Express, Mastercard, Visa, and Discover. The disclaimer explains what the app does and how you should use it.

What would you do with these credit card number generators? Let us know in the comments section.

You’ve probably heard that a strong password is really important to keep your accounts safe. You’ve also probably heard that people are still not creating good passwords. But even if you are—or at least you think you are—hackers are smart and they’ve figured out ingenious ways to crack what you think is a secure password.

Here’s how they do it:

Dashlane, a password manager tool, took a look at 61 million passwords from data breaches. These passwords were available to hackers, of course, but also to the public and even security researchers. To the surprise of precisely nobody, the biggest takeaway was that people’s passwords were far from original, and most of them were actually the same.

The most popular passwords were “Ferrari,” “iloveyou,” “starwars,” and of course “password1234.”

If you’re a hacker, let’s be honest, these aren’t hard to guess. And, in fact, there are tools out there that will help make life even easier.

“John the Ripper”

One of the most common tools is “John the Ripper.” This tool uses what’s known as a “dictionary attack,” where it takes a list of dictionary words and uses them to crack passwords. The tool can try millions of words in a short space of time, and it can do sneaky things like replacing an “a” with an “@” or an “e” with “3.”

In short, if your password contains a real word of any kind, even an inexperienced hacker can use a tool to figure it out in seconds.

Password walking

One other thing Dashlane noticed was that many people thought they were being creative by using a tactic called “password walking.” Basically, this is when you “walk” your fingers across the keyboard, hitting keys that are adjacent. This creates a password that looks unique and random, like “zxcvbn,” “1q2w3e4r,” or “poiuytr.”

While you might think a password such as this is secure, hackers know people use these tricks and can plug in any number of variations into their tools and test them out. Once again, in a matter of moments, a hacker will figure out your password.

Password formula

Some may think that a password formula based on the name of the particular website you are using is a smart idea. But, again, it’s hard to trick a hacker. This is especially true if a hacker figures out your “base password” (the part of your password that you use over and over again…another common tactic). They’ll then use that and try different variations, or other common combinations, to piece the puzzle together.

Let’s imagine, for instance, that you use the password “Porsche3$5^” for Twitter and “Porsche4%6&” for Facebook. All you did was change the second half and then went “password walking.” This is child’s play for hackers.

“How to hack passwords,” from a hacker himself

Here’s what goes on in the mind of a hacker, according to a person who has hacked thousands of accounts and documented his tactics on Lifehacker.

Follow his logic in this section taken from his article:

  • You probably use the same password for lots of stuff, right?
  • Some sites you access, such as your bank or work VPN, probably have pretty decent security, so I’m not going to attack them.
  • However, other sites like the Hallmark e-mail greeting cards site, an online forum you frequent, or an e-commerce site you’ve shopped at might not be as well prepared. So those are the ones I’d work on.
  • So, all we have to do now is unleash Brutus, wwwhack, or THC Hydra on their server with instructions to try, say, 10,000 (or 100,000 – whatever makes you happy) different usernames and passwords as fast as possible.
  • Once we’ve got several login+password pairings, we can then go back and test them on targeted sites.
  • But wait… How do I know which bank you use and what your login ID is for the sites you frequent? All those cookies are simply stored, unencrypted and nicely named, in your Web browser’s cache.

From this, you can see how the mind of a hacker works. And also how sophisticated (yet kind of simple) it is for them to figure things out.

And what’s not mentioned in this segment is the part your social media channels play—you know, where you talk about your favourite dog “Chappy” or your kid’s birthdate. Odds are, you probably use these personal details in your passwords. So, a quick search on Facebook and a hacker can find a few good words and numbers to plug into their hacking tool and figure out some viable options.

The moral of the story is this: stop trying to come up with clever passwords based on names, places, or things in your life. Instead, use a password manager, which will automatically create random passwords for all of your accounts. For example, my password manager just generated “ppwjK!C$p8g^2B”, which is ridiculously strong and highly unlikely to be guessed. And the added benefit is that a password manager will remember the passwords, so you don’t have to.

Also, make sure your password is long. It is far easier for a hacker to crack a short password, and it makes an enormous difference to use a variety of characters rather than just lowercase letters.

From that same Lifehacker article:

Pay particular attention to the difference between using only lowercase characters and using all possible characters (uppercase, lowercase, and special characters – like @#$%^&*). Adding just one capital letter and one asterisk would change the processing time for an 8-character password from 2.4 days to 2.1 centuries.


Though you cannot stop your important accounts from being breached (that is up to the organizations and companies that own them), you can do something on your end to minimize the chance of your password being hacked.

When it comes to web application security, one often thinks about the obvious: sanitize user input, transmit data over encrypted channels, and use secure functions. Often overlooked are the positive effects that HTTP response headers, in conjunction with a modern web browser, can have on web security.

Active Security

Here we will take a look at the headers recommended by the Open Web Application Security Project (OWASP). These headers can be utilised to actively increase the security of the web application.

X-FRAME-OPTIONS

This header gives instructions to the browser on whether and when a page should be displayed as part of another page (i.e. in an IFRAME). Allowing a page to be loaded inside an IFRAME opens up the risk of a so-called clickjacking attack. In this attack the target site is loaded in the background, hidden from the victim. The victim is then enticed to perform clicks on the website (e.g. through a survey or a prize draw); these clicks are secretly executed on the target site in the background. If the victim is currently logged in to the target site, those clicks are performed in the context of this user’s session. Via this method it is possible to execute commands as the user, as well as to exfiltrate information from the user’s context.

The X-Frame-Options header can be used with the following options:

  • DENY
  • SAMEORIGIN
  • ALLOW-FROM <uri> (where <uri> is your desired URI, including the protocol handler)

Unless your application explicitly requires being loaded inside an IFRAME, you should set the header to DENY:

X-Frame-Options: DENY

If your application uses IFRAMEs within the application itself, then you should set the header to SAMEORIGIN:

X-Frame-Options: SAMEORIGIN

If you want your page to be frameable across a different origin, then you should explicitly define the external origin:

X-Frame-Options: ALLOW-FROM contextis.co.uk

Please note that the ALLOW-FROM directive of the X-Frame-Options header expects exactly one domain. No protocol, no port, no path and no wildcard.

STRICT-TRANSPORT-SECURITY

This header, often abbreviated as HSTS (HTTP Strict Transport Security), tells the browser to enforce an HTTPS connection whenever a user tries to reach a site that sends this header. All major browsers support this feature and, when it is set, will:

  1. Only connect to the site via HTTPS
  2. Convert all HTTP references on the site (e.g. JavaScript includes) to HTTPS and
  3. Refuse to load the website in case of errors with the SSL certificate (e.g. Certificate expired, broken certificate chain, …)

It is important to note that this header can only be set via an HTTPS response, so the user needs to connect to the site at least once via HTTPS, unless you make some special preparations (more on that in a moment). It is also important to note that the header is only valid for a certain amount of time: the lifetime is specified in seconds. Context recommends the following setting, which tells the browser to obey the STS setting for half a year:

Strict-Transport-Security: max-age=15768000

Should this rule be extended to cover all subdomains, then the header can be extended by adding the attribute ‘includeSubDomains’:

Strict-Transport-Security: max-age=15768000; includeSubDomains

Some browsers (at least Chrome, Firefox, IE11/Edge and Safari) ship with a “preload list”, a list of URLs that have explicitly declared that they want to use HSTS. If the user tries to access a listed URL, the browser automatically enforces the HSTS rule, even for the very first connection, which otherwise would have been vulnerable to a man-in-the-middle attack. To add your own website to this preload list you have to submit the URL on the preload submission page (hstspreload.org) and append the preload directive to the header, e.g.:

Strict-Transport-Security: max-age=15768000; includeSubDomains; preload

X-XSS-PROTECTION

This header is surrounded by a little controversy, and different people recommend different settings; some even recommend explicitly disabling it. So what is the deal with this header?

The purpose of this header is to instruct the web browser to utilize its Cross-Site Scripting protection, if present (X-XSS-Protection: 1). Currently only Chrome, Internet Explorer and Safari have such an engine built in and understand this header (Firefox seems to rely on the third-party add-on NoScript).

It might seem like a good idea to try and filter malicious requests where the attack happens, at the browser, but filtering is very hard, especially when one tries to heuristically detect malicious code, sanitize it and at the same time maintain a working site. This led to several filter bypasses and even introduced Cross-Site Scripting vulnerabilities on previously healthy sites.

Once it became apparent that building a heuristic filter that tries to sanitize unknown code is a Sisyphean task, a new all-or-nothing approach was invented: X-XSS-Protection: 1; mode=block. If this mode is set, the browser is instructed not to render the page at all, but instead to display an empty page (about:blank). But even that approach had flaws in its early implementations, leading some major sites (such as facebook.com, live.com and slack.com) to explicitly disable the XSS filter (X-XSS-Protection: 0).

So while it is difficult to give a definitive recommendation for this header, it seems that the variant ‘X-XSS-Protection: 1; mode=block’ has matured rather well and has outgrown its early flaws. Besides that, the best protection against Cross-Site Scripting is still sanitizing all your in- and output ;-).

To explicitly enable the filter that tries to sanitize malicious input set the following header:

X-XSS-Protection: 1

To use the all-or-nothing approach that blocks a site when malicious input is detected set the following header:

X-XSS-Protection: 1; mode=block

Additionally, one can set a ‘report’ parameter that contains a URL. If one of the WebKit XSS auditors (Chrome, Safari) encounters an attack, it will send a POST message to this URL containing details about the incident:

X-XSS-Protection: 1; mode=block; report=https://domain.tld/folder/file.ext

X-CONTENT-TYPE-OPTIONS

This header can be used to prevent certain versions of Internet Explorer from ‘sniffing’ the MIME type of a page. It is a feature of Internet Explorer to interpret sites served as ‘Content-Type: text/plain’ as HTML when they contain HTML tags. This, however, introduces cross-site scripting risks when one has to deal with user-provided content. The X-Content-Type-Options header knows only one option – ‘nosniff’ – which prevents the browser from trying to sniff the MIME type.

X-Content-Type-Options: nosniff

PUBLIC-KEY-PINS

Public-Key-Pinning, also known as HTTP Public Key Pinning (HPKP for short), is still relatively new and is not yet widely used. However, it has great security potential, as it allows site operators to specify (‘pin’) a valid certificate and rely less on CAs, which in the past have proven to be susceptible to attack (e.g. any CA could create a technically valid and trusted certificate that has not been issued by you). Similar to HSTS, the browser is then supposed to remember this pin and only accept connections to a site if the certificate pin matches the pin provided by the header. This, however, means that an unexpected certificate change can leave visitors locked out of the web presence. For this reason it is required to provide a backup certificate pin that can be used when the first one fails. It also must include a max-age attribute, which, once again, specifies the lifetime in seconds. Please bear in mind that this is also the potential lockout time for an unaware user.

Public-Key-Pins: pin-sha256="<sha256>"; pin-sha256="<sha256>"; max-age=15768000;

Should this rule be extended to cover all subdomains, then the header can be extended by adding the attribute ‘includeSubDomains’:

Public-Key-Pins: pin-sha256="<sha256>"; pin-sha256="<sha256>"; max-age=15768000; includeSubDomains

CONTENT-SECURITY-POLICY (SUPERSEDES X-CONTENT-SECURITY-POLICY AND X-WEBKIT-CSP)

The Content-Security-Policy (CSP for short) is a flexible approach to specifying which content on a site may be executed and which may not. One of the current problems is that the web browser does not know which sources to trust and which not to trust; for example, is a third-party JavaScript include from apis.google.com good or bad? The only proper solution is source whitelisting, where the developer specifies legitimate resource locations. A very basic example of how to allow JavaScript (script-src) from both the local site (‘self’) and apis.google.com:

Content-Security-Policy: script-src 'self' https://apis.google.com

CSP has a few additional keywords that allow for very granular access control. It is important to note that CSP is intended as a per-site model, so every site needs its own set of headers.

Passive Security

The following headers do not actively enable any security-related features but rather have a passive impact on security, typically by revealing more information than necessary. By now it is well established that security by obscurity is a more than questionable concept if you rely on it alone. However, there is little to no gain in leaving the cards lying open on the table, providing an attacker with valuable information; just don’t think that this alone is enough.

COOKIE ATTRIBUTES (HTTPONLY AND SECURE)

Often overlooked are the special attributes that can be associated with cookies, which can drastically reduce the risks of cookie theft.

HttpOnly

The HttpOnly attribute tells the browser to deny JavaScript access to this cookie, making it more difficult to access via cross-site scripting.

Set-Cookie: cookie=xxxxxxxxx; HttpOnly

Secure

The Secure attribute tells the browser to send this cookie only over an HTTPS connection. This should be the norm for all session- and authentication-related cookies, as it prevents easy interception via an unencrypted HTTP connection.

Set-Cookie: cookie=xxxxxxxxx; Secure

Of course these attributes can be combined:

Set-Cookie: cookie=xxxxxxxxx; HttpOnly; Secure

SERVER / X-POWERED-BY

Both of these headers advertise the server software in use and its version number. While these headers might be nice for debugging purposes, they do not contribute to the user experience in any way and should either be omitted entirely or reduced to an amount that does not leak any version details.

CACHING DIRECTIVES

Another issue that is often overlooked is the caching of sensitive information by the browser. A browser frequently stores elements of a website in a local cache to speed up the browsing experience. While this behaviour is fine for non-sensitive sites and elements like graphics and stylesheets, it has to be avoided for sensitive information (e.g. pages from an authenticated area of a web application). The problem gets worse in a shared computing environment (e.g. office, school, internet café …), where other users can easily access your browser cache. To tell the browser (and possible intermediate caches such as proxies) not to store anything in its cache, one should use the following directives:

Cache-Control: no-cache, no-store
Expires: 0
Pragma: no-cache

It is important to note that the often encountered directive “Cache-Control: private” cannot be considered secure in a shared computing environment, as it still allows the browser to store these elements in its cache.

ETag

The “Entity Tag” (ETag for short) header is used for caching purposes. The server uses a special algorithm to calculate an individual ETag for every revision of a file it serves. The browser can then ask the server whether the file is still available under this ETag. If it is, the server responds with a short HTTP 304 status telling the browser to use the locally cached version; otherwise it sends the full resource as part of an HTTP 200 status.

While this is a useful header, you’ll sometimes find a reference to it in vulnerability-related articles or reports. The problem is that certain versions of Apache (versions before 2.3.14) used to disclose, in their default configuration, the inode of the file being served. The inode can be used for further attacks, e.g. via the Network File System (NFS), which uses these inodes to create file handles. The problematic default configuration has been corrected in more recent Apache versions, but you should nonetheless make sure that the FileETag setting in your httpd.conf does not contain the INode attribute. The following line is fine:

FileETag MTime Size

The following line is NOT: 

FileETag INode MTime Size

X-ROBOTS-TAG AND ROBOTS.TXT

The X-Robots-Tag header can be used to give search engines that support this header directives on how a page or file should be indexed. The advantage of the X-Robots-Tag over a single robots.txt file or the robots meta tag is that this header can be set and configured globally and can be adjusted to a very granular and flexible level (e.g. with a regular expression that matches certain URLs). Sending a meta tag with a media file – not possible. Sending an HTTP header with a media file – no problem. It also has the advantage of disclosing information on a per-request basis instead of in a single file. Just think about the secret directories that you don’t want anyone to know about: listing them in a robots.txt file with a disallow entry? Probably a bad idea, since this lets everyone immediately know what you want to hide; you might as well just put a link on your website.

So should you ditch the robots.txt altogether and rely solely on the X-Robots-Tag? Probably not; instead, combine them for the greatest compatibility. However, keep in mind that the robots.txt file should only contain files and directories that you want to be indexed. You should never list files that you want to block; instead, place a general disallow entry in the robots.txt:

An example to block everything:

User-Agent: *
Disallow: /

An example that tells the crawler to index everything under /Public/ but not the rest:

User-Agent: *
Allow: /Public/
Disallow: /

ADDING CUSTOM HEADERS IN VARIOUS HTTP SERVERS

Below you can find a general example on how to set a static custom HTTP header in different HTTP server software, as well as links to a more in-depth manual for setting more complex header rules.

Apache

For Apache it is recommended to use the module ‘mod_headers’ to control HTTP headers. The directive ‘Header’ can be placed almost anywhere in the configuration file, e.g.:

<Directory "/var/www/dir1">
    Options +Indexes
    Header set X-XSS-Protection "1; mode=block"
</Directory>

For a more detailed guide on how to set HTTP headers with mod_headers please refer to this page.

Internet Information Services (IIS)

For IIS there are two ways to set custom headers:

1. Via command line:

appcmd set config /section:httpProtocol /+customHeaders.[name='X-XSS-Protection',value='1; mode=block']

2. Via the graphical interface: Open IIS Manager and use the Connections pane to find the appropriate level you want to enable the header for. In the Home pane, double-click ‘HTTP Response Headers’. Then, in the Actions pane, click ‘Add…’ and set both the name and the value for the header you want to set. In our example the name would be ‘X-XSS-Protection’ and the value would be ‘1; mode=block’.

For a more detailed guide on how to set HTTP headers with IIS please refer to this page.

Lighttpd

For Lighttpd it is recommended to use the module ‘mod_setenv’ to control HTTP headers. The directive ‘setenv.add-response-header’ can be placed almost anywhere in the configuration file, e.g.:

setenv.add-response-header = (
      "X-XSS-Protection" => "1; mode=Block" 
    )

For a more detailed guide on how to set HTTP headers with Lighttpd please refer to this page.

NGINX

For NGINX it is recommended to use the module ‘ngx_http_headers_module’ to control HTTP headers. The directive ‘add_header’ can be placed in the appropriate location in the configuration file, e.g.:

server {
    listen       80;
    server_name  domain1.com www.domain1.com;
    root         html;

    location / {
      add_header X-XSS-Protection "1; mode=block" always;
    }
  }

For a more detailed guide on how to set HTTP headers with NGINX please refer to this page.

Summary and Conclusions

We have seen that there are quite a few more or less new HTTP headers that can actively contribute to a web site’s security. We have also seen that there are a few well-established headers that might be worth revisiting to decrease the amount of information that is leaked.

We have all used sites such as bugcrowd.com, but did you know that some companies offer bug bounties through their own websites?

This list will help bug bounty hunters and security researchers to explore different bug bounty programs and responsible disclosure policies.

Company URL
The Atlantic https://www.theatlantic.com/responsible-disclosure-policy/
Rollbar Docs https://docs.rollbar.com/docs/responsible-disclosure-policy
Vulnerability Analysis https://vuls.cert.org/confluence/display/Wiki/Vulnerability+Disclosure+Policy
Ambassador Referral Software https://www.getambassador.com/responsible-disclosure-policy
NN Group https://www.nn-group.com/Footer-Pages/Ethical-hacking-NN-Groups-Responsible-Disclosure-Policy.htm
Octopus Deploy https://octopus.com/security/disclosure
Mimecast https://www.mimecast.com/responsible-disclosure/
Royal IHC https://www.royalihc.com/en/responsible-disclosure-policy
SignUp.com https://signup.com/responsible-disclosure-policy
MailTag https://www.mailtag.io/disclosure-policy
Fox-IT (ENG) https://www.fox-it.com/en/responsible-disclosure-policy/
Kaseya https://www.kaseya.com/legal/vulnerability-disclosure-policy
Vend https://www.vendhq.com/responsible-disclosure-policy
Gallagher Security https://security.gallagher.com/gallagher-responsible-disclosure-policy
Surevine https://www.surevine.com/responsible-disclosure-policy/
IKEA https://www.ikea.com/ms/en_US/responsible-disclosure/index.html
Bunq https://www.bunq.com/en/terms-disclosure
GitLab https://about.gitlab.com/disclosure/
Rocket.Chat https://rocket.chat/docs/contributing/security/responsible-disclosure-policy/
Quantstamp https://quantstamp.com/responsible-disclosure
WeTransfer https://wetransfer.com/legal/disclosure
18F https://18f.gsa.gov/vulnerability-disclosure-policy/
Veracode https://www.veracode.com/responsible-disclosure/responsible-disclosure-policy
Oracle https://www.oracle.com/support/assurance/vulnerability-remediation/disclosure.html
Mattermost https://about.mattermost.com/report-security-issue/
Freshworks Inc. https://www.freshworks.com/security/responsible-disclosure-policy
OV-chipkaart https://www.ov-chipkaart.nl/service-and-contact/responsible-disclosure-policy.htm
ICS-CERT https://ics-cert.us-cert.gov/ICS-CERT-Vulnerability-Disclosure-Policy
Netflix https://help.netflix.com/en/node/6657
RIPE Network https://www.ripe.net/support/contact/responsible-disclosure-policy
Pocketbook https://getpocketbook.com/responsible-disclosure-policy/
Salesforce Trust https://trust.salesforce.com/en/security/responsible-disclosure-policy/
Duo Security https://duo.com/labs/disclosure
EURid https://eurid.eu/nl/other-infomation/eurid-responsible-disclosure-policy/
Oslo Børs https://www.oslobors.no/ob_eng/Oslo-Boers/About-Oslo-Boers/Responsible-Disclosure
Marketo https://documents.marketo.com/legal/notices/responsible-disclosure-policy.pdf
FreshBooks https://www.freshbooks.com/policies/responsible-disclosure
BizMerlinHR https://www.bizmerlin.com/responsible-disclosure-policy
MWR InfoSecurity https://labs.mwrinfosecurity.com/mwr-vulnerability-disclosure-policy
KAYAK https://www.kayak.co.in/security
98point6 https://www.98point6.com/responsible-disclosure-policy/
AlienVault https://www.alienvault.com/documentation/usm-appliance/system-overview/how-to-submit-a-security-issue-to-alienvault.htm
Seafile https://www.seafile.com/en/responsible_disclosure_policy/
LevelUp https://www.thelevelup.com/security-response
BankID https://www.bankid.com/en/disclosure
Orion Health https://orionhealth.com/global/support/responsible-disclosure/
Aptible https://www.aptible.com/legal/responsible-disclosure/
NowSecure https://www.nowsecure.com/company/responsible-disclosure-policy/
Takealot.com https://www.takealot.com/help/responsible-disclosure-policy
Smokescreen https://www.smokescreen.io/responsible-disclosure-policy/
Royal Bank of Scotland https://personal.rbs.co.uk/personal/security-centre/responsible-disclosure.html
Flood IO https://flood.io/security
CERT.LV https://www.cert.lv/en/about-us/responsible-disclosure-policy
Zero Day Initiative https://www.zerodayinitiative.com/advisories/disclosure_policy/
Geckoboard https://support.geckoboard.com/hc/en-us/articles/115007061468-Responsible-Disclosure-Policy
Internedservices https://www.internedservices.nl/en/responsible-disclosure-policy/
FloydHub https://www.floydhub.com/about/security
Practo https://www.practo.com/company/responsible-disclosure-policy
Zimbra https://wiki.zimbra.com/wiki/Zimbra_Responsible_Disclosure_Policy
Cyber Safety https://www.utwente.nl/en/cyber-safety/responsible/
Port of Rotterdam https://www.portofrotterdam.com/en/responsible-disclosure
Georgia Institute of … http://www.policylibrary.gatech.edu/information-technology/responsible-disclosure-policy
NautaDutilh https://www.nautadutilh.com/nl/responsible-disclosure/
BitSight Technologies https://www.bitsighttech.com/responsible-disclosure
BOSCH https://psirt.bosch.com/en/responsibleDisclosurePolicy.html
CARD.com https://www.card.com/responsible-disclosure-policy
SySS GmbH https://www.syss.de/en/responsible-disclosure-policy/
Mailtrack https://mailtrack.io/en/responsible-vulnerability
Pinterest https://policy.pinterest.com/en/responsible-disclosure-statement
PostNL https://www.postnl.nl/en/responsible-disclosure/
Pellustro https://pellustro.com/responsible-disclosure-policy/
iWelcome https://www.iwelcome.com/responsible-disclosure/
Hacking as a Service https://hackingasaservice.deloitte.nl/Home/ResponsibleDisclosure
N.V. Nederlandse Gasunie https://www.gasunie.nl/en/responsible-disclosure
Hostinger https://www.hostinger.co.uk/responsible-disclosure-policy
SiteGround https://www.siteground.com/blog/responsible-disclosure/
Odoo https://www.odoo.com/security-report
Thumbtack https://help.thumbtack.com/article/responsible-disclosure-policy
ChatShipper http://chatshipper.com/responsible-disclosure-policy/
ServerBiz https://server.biz/en/legal/responsible-disclosure
Palo Alto Networks https://www.paloaltonetworks.com/security-disclosure

  1. wifite
    Link Project: https://github.com/derv82/wifite
    Wifite is an automated wireless attack tool for Linux only. It was designed for use with pentesting distributions of Linux such as Kali Linux, Pentoo, and BackBox, and with any Linux distribution whose wireless drivers are patched for injection. The script appears to also work with Ubuntu 11/10, Debian 6, and Fedora 16. Wifite must be run as root, which is required by the suite of programs it uses. Since running downloaded scripts as root is a bad idea, I recommend using the Kali Linux bootable Live CD, a bootable USB stick (for persistence), or a virtual machine; note that virtual machines cannot directly access hardware, so a wireless USB dongle would be required. Wifite assumes that you have a wireless card and the appropriate drivers, patched for injection and promiscuous/monitor mode. A quick-start sketch is shown after this list.
  2. wifiphisher
    Link Project: https://github.com/sophron/wifiphisher
    Wifiphisher is a security tool that performs automatic Wi-Fi association attacks to force wireless clients to unknowingly connect to an attacker-controlled access point. It is a rogue access point framework that can be used to mount automated, victim-customized phishing attacks against Wi-Fi clients in order to obtain credentials or infect the victims with malware. Unlike other methods, it works as a social-engineering attack tool and does not include any brute forcing. It is an easy way to obtain credentials from captive portals and third-party login pages (e.g. on social networks) or WPA/WPA2 pre-shared keys. Wifiphisher works on Kali Linux and is licensed under the GPL.
  3. wifi-pumpkin
    Link Project: https://github.com/P0cL4bs/WiFi-Pumpkin
    WiFi-Pumpkin has a very friendly graphical user interface and handles well; it is my favourite of the Wi-Fi phishing attack tools, with a rich functional interface, excellent ease of use, and very good compatibility. The researcher updates it actively, so this fun project is worth following.
  4. fruitywifi
    Link Project: https://github.com/xtr4nge/FruityWifi
    FruityWifi is an open source tool to audit wireless networks. It allows the user to deploy advanced attacks by directly using the web interface or by sending messages to it.
    Initially the application was created to be used with the Raspberry Pi, but it can be installed on any Debian-based system.
  5. mana toolkit
    Link Project: https://github.com/sensepost/mana
    A toolkit for rogue access point (evil AP) attacks, first presented at DEF CON 22.
    More specifically, it contains the improvements to KARMA attacks that SensePost implemented in hostapd, as well as some useful configs for conducting MitM attacks once you’ve managed to get a victim to connect.
  6. 3vilTwinAttacker
    Link Project:https://github.com/wi-fi-analyzer/3vilTwinAttacker
    Much like WiFi-Pumpkin in its interface: it has a good graphical interface, the overall experience is very good, and ease of use and compatibility are both good. However, the researcher has hardly updated it.
  7. ghost-phisher
    Link Project: http://tools.kali.org/information-gathering/ghost-phisher
    It has a good graphical interface but almost no fault tolerance, and many of the options are easily confused; still, the overall experience is very good. It can set up a rogue AP at the press of a key, provides DHCP and DNS service interfaces, and makes it easy to launch a variety of man-in-the-middle attacks, so ease of use is good. Compatibility is good. The Kali team has taken over updating the original repo.
  8. fluxion
    Link Project: https://github.com/wi-fi-analyzer/fluxion
    Fluxion is a remake of linset by vk496 with (hopefully) fewer bugs and more functionality. It is compatible with the latest release of Kali (rolling). The attack is mostly manual, but experimental versions will automatically handle most functionality from the stable releases.
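
As promised in the wifite entry above, here is a minimal quick-start sketch. It assumes a Debian-based system with git installed, a wireless adapter capable of monitor mode, and that you accept the root-related caveats described in that entry; the repository URL is the project link from the list.

# Fetch and launch wifite (must be run as root)
git clone https://github.com/derv82/wifite
cd wifite
sudo ./wifite.py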

Happy Hunting