Top 5 SIM Cloning Tools To Clone SIM Card

Posted: 03/09/2021 in Uncategorized

Cloning a SIM card isn’t the easiest thing to do, but these five applications make it easy. Give them a try and tell us how you got on.

1. MOBILedit

Download URL:

MOBILedit is a popular SIM duplicator that can be used to format a SIM card or modify it pretty easily. You can clone a SIM card, copy its content, and create customized cards as well. The entire SIM cloning tool comes with a pack of cards that can readily be used and a SIM card cloning software.

  • The toolkit consists of rewritable SIM cards and cloning software.
  • It doesn’t require any authentication or matching of the PIN to clone the SIM card.
  • It supports multiple readers with the transfer of all the essential data.
  • Users can also format an old SIM card using its software.

2. Magic SIM

Download URL:

If you are looking for a lightweight and easy-to-use SIM card cloning software, then give Magic SIM a try. It is a software-only SIM duplicator available for Windows PCs, so you have to buy a SIM card reader/writer and an empty SIM separately.

  • All the GSM V1 SIM cards can be copied with this SIM cloning tool.
  • The desktop application is compatible with every major version of Windows.
  • It can copy all the major kinds of data like contacts, logs, messages, and more.
  • Has an easy-to-use interface.

3. USB Cell Phone SIM Card Cloner

Download URL:

The USB Cell Phone SIM Card Cloner provides a trouble-free way to copy your data from one SIM card to another. The SIM cloning tool comes with dedicated software and a USB adapter. You can attach your SIM card to the adapter and connect it to your system. Later, you can use its SIM card clone app to copy the card.

  • The SIM duplicator supports multiple cards.
  • It can be used to back up the contents of a SIM card as well.
  • Users can easily modify or copy one SIM card’s content to another.
  • Comes with a USB adapter and its own SIM card cloning software.

4. SIM Explorer by Dekart

Download URL:

A highly advanced SIM card clone app, SIM Explorer by Dekart will certainly meet your every requirement. It performs live and offline SIM card analysis, making sure that the card has not been tampered with. The SIM cloning tool supports three scanning methods: manual, smart, and full. In this way, you can use this SIM duplicator to migrate to another phone easily.

  • It can view and edit GSM SIM, 3G USIM, and CDMA R-UIM cards.
  • You can also obtain in-depth information related to the SIM by opening it in read-only mode.
  • By providing the ADM codes, you can edit the inserted SIM card easily.
  • The tool can also be used to perform a backup of your SIM card.

5. Mister SIM

Download URL:

Developed by Mobistar, Mister SIM is another popular SIM card clone app that has been around for a long time. It works as a complete SIM management tool that can help you take a backup of your SIM data and copy it from one device to another. Apart from contacts, you can also copy messages, call logs, and other vital information.

  • Provides a fast and easy way to manage your SIM data.
  • Users can easily copy the content of their SIM to a PC or another SIM card.
  • Move from one device to another without losing your data or numbers.

Many companies spend a fortune on next-generation anti-virus and machine learning “AI” tools to halt the spread of ransomware, and although I strongly believe that user education and training play a key part in this, Windows itself can help in a massive way. File Server Resource Manager (FSRM), a resource already built into Windows, can halt the spread and quarantine the accounts that are affected.

This solution utilises PowerShell and Windows File Server Resource Manager to automatically lock out a user account when ransomware activity is detected.

Installing FSRM
First and foremost, you will need to set up FSRM on your file servers. This feature is part of the File Services Role and can be installed with the following PowerShell command (all one line).

Install-WindowsFeature –Name FS-Resource-Manager

Take note, FSRM is only available on Windows Server. If you’re interested in workstation mitigation, comment below and I’ll get to writing!

Get Email Alerts
In order to be emailed about the action our killswitch takes, we will need to set up the SMTP server settings within FSRM. We don’t necessarily have to do this right now, but it saves us from seeing annoying prompts in the later steps.

Open up Server Manager > File and Storage Services > Right-click on your server > File Server Resource Manager (this can also be accessed through Administrative Tools). Once opened, right-click “File Server Resource Manager (Local)” in the left pane and select “Configure Options…” Go ahead and set up all your email settings, similar to below.
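If you prefer to script this step, the same SMTP settings can be set with the FsrmSetting cmdlet; this is a sketch in which the server name and both addresses are placeholders for your own:

```powershell
# Configure FSRM's SMTP settings so file screen events can be emailed.
# smtp.corp.local and both addresses below are placeholders.
Set-FsrmSetting -SmtpServer "smtp.corp.local" `
                -AdminEmailAddress "itsecurity@corp.local" `
                -FromEmailAddress "fsrm@corp.local"
```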

Set up Killswitch Directory
In your corporate file share(s), set up a directory that begins with an underscore. If the ransomware is encrypting alphabetically, this will ensure that it is tripped as soon as possible. Within that directory, we will place a text file called killswitch.txt.
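The directory and bait file can also be created from PowerShell; the share path below is a placeholder for your own file share:

```powershell
# Create the canary directory at the top of the share (the underscore keeps
# it first alphabetically) and drop the bait file inside it.
New-Item -ItemType Directory -Path "D:\Shares\CorpData\_Killswitch" -Force
Set-Content -Path "D:\Shares\CorpData\_Killswitch\killswitch.txt" `
            -Value "Canary file - do not modify or delete."
```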

Set Up the Killswitch
Many variants of ransomware look to find mapped drives and will begin encrypting data in alphabetical order. Because of this, our killswitch is going to be a directory placed in the file shares that begins with an underscore.

Create a new File Group under File Screening Management that will look at all files except our killswitch.txt.
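This File Group can also be created with PowerShell; a sketch assuming the “All File Types” name used below:

```powershell
# Match every file except the bait file itself, so the screen trips on
# anything ransomware writes or renames inside the killswitch directory.
New-FsrmFileGroup -Name "All File Types" `
                  -IncludePattern @("*") `
                  -ExcludePattern @("killswitch.txt")
```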

Next, we will create a File Screen Template utilizing the File Group we created called “All File Types”.

We will want to configure email alerts, so on the E-Mail Message tab, fill out the pertinent information.

We also want to automate the removal of the offending user in order to stop the ransomware from encrypting our entire file server. We will do this with some PowerShell. Copy the following and save it to your preferred location. In this example, I’m just saving it to C:\kickuser.ps1.

param( [string]$username = "" )

# Deny the offending user access to every non-administrative share.
Get-SmbShare -Special $false | ForEach-Object {
    Block-SmbShareAccess -Name $_.Name -AccountName "$username" -Force
}

On the Command tab, check “Run this command or script:” and enter the following:


For the command arguments, insert the following:

-Command "& {C:\kickuser.ps1 -username '[Source Io Owner]'}"

Set it to run as Local System.

Apply the File Screen
From within FSRM, Select File Screening Management > File Screens and create a new File Screen. Set the path to your underscore directory and use the “Detect Ransomware” File Screen template that we created earlier.
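For completeness, the same File Screen can be created with PowerShell; the path is a placeholder and the template name is the one referenced above:

```powershell
# Apply the "Detect Ransomware" template to the killswitch directory.
New-FsrmFileScreen -Path "D:\Shares\CorpData\_Killswitch" `
                   -Template "Detect Ransomware"
```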


To test, I created a test account (test guy) and modified the file. The account was instantly locked out of the share. The output of our PowerShell script, as well as the share permissions, shows this:



Wrapping Up
This methodology should help mitigate some risk around ransomware attacks. In the future, it may also be beneficial to make the following changes:

  1. Create a secondary killswitch in a ZZZ_Killswitch directory in case a ransomware-variant starts in reverse-alphabetical order.

I believe in using the resources we already have available to us in helping secure our organisations, and hopefully, this helps. Feel free to comment with any questions or suggestions.


What is Remote SSH Tunneling

Posted: 07/09/2019 in Uncategorized

What is Remote SSH Tunneling?

There are two different ways to make an SSH tunnel: local and remote port forwarding. Here we will be talking about remote SSH tunnelling. Imagine that you belong to a company where plenty of internal websites are available only inside the company network, but you need to connect to these websites from a remote machine outside the network. What could be the solution?

This situation seems perfect for using a VPN to connect to the company network without any hassle. That solution, however, requires some setup work that may be out of your hands and cannot be established at the moment. Creating a reverse remote SSH tunnel is the perfect choice in this case.

Now, the command is executed on the work machine to connect out to the remote device, which we’ll call the home machine. The connection treats the work machine as the SSH client and the home machine as the SSH server. Yet why should the command be run from the work machine in the first place? Because its outbound traffic is allowed while incoming traffic is blocked.

The following command will set up the desired tunnel; here intra-site is a placeholder for the internal website’s hostname:
ssh -R 9001:intra-site:80 home (Executed from 'work')

Please note that the snippet uses remote port forwarding (“R”): the port to be forwarded is 9001 on the home machine, the remote host is the internal website, and of course the port to which forwarding happens is 80, reachable from the work machine. Now all requests to port 9001 of the home machine will lead to a connection to that internal website.

On the home machine, typing the following URL will simply open the website from home with no magic: http://localhost:9001

It is important to note that the traffic between the work machine and the internal site is not encrypted, yet the connection between the work machine and the home machine is, of course, encrypted inside the SSH channel.

How can remote HTTP Tunneling be performed using SSH? 

A remote SSH connection is established between a home machine and a work machine; the work machine can connect to the internal office server and any website there, and those documents can then be read on the home machine through the remote SSH tunnel.

This enables anybody on the remote server to connect to TCP port 8080 on the remote server. The connection is then tunnelled back to the client host, and the client then makes a TCP connection to port 80 on localhost. Any other hostname or IP address could be used instead of localhost to specify the host to connect to.

In the following lines, an HTTP connection is established between a remote PC and a client, where the two machines do not belong to the same network. Let’s take the following five points for granted before we start:

  1. There is an SSH server which has two Ethernet interfaces.
  2. The local IP address is
  3. The IP address of the remote system is , residing outside of the network.
  4. The IP address of is connected to another local network system with an IP address of
  5. The Ubuntu client has the following IP address:

The following steps are to get followed for the sake of establishing the Remote SSH tunnelling:

  1. Open the terminal and type the following command to get the network configuration:
  2. The configuration of the SSH server should now show that there are two IP addresses connected: and
  3. The configuration of the SSH server should also appear after typing the command mentioned above. The following IP address should appear as running as an SSH client on Ubuntu:
  4. On the remote desktop, the command prompt (cmd) can be used to find its IP; in our case it should show the IP address of:
  5. Because we are using HTTP tunnelling in this case, the service will run on port 80 of the Xampp server at localhost.
  6. If the website is WordPress, it will then work on port 80.
  7. Such a website can then be reached by the SSH server through the following URL:
  8. The remote desktop can be connected to through such a URL. This only holds for devices on the same network; if each of them resides on a different network, it will cause a problem.
  9. Let’s verify this fact by trying to reach the URL from the Ubuntu client, which is on another network. This connection will not be established because the two machines are on different networks.
  10. Now make use of the PuTTY software to establish a connection between the remote desktop and the Ubuntu client.
  11. Under “Host Name (or IP address)”, get the IP of “” typed.
  12. Under “Port” section, type “22”. And choose the connection type as “SSH”
  13. Now, navigate to “Tunnel” residing under “SSH” in the left part of the screen titled “Category”
  14. Under “Port forwarding”, get the first option marked. It is the option of “Local ports accept connections from other hosts”.
  15. Besides “Source Port” type “7000”
  16. Choose the “Destination” as “”
  17. Choose the connection as “Remote”
  18. Press “Add” in order to get these changes applied.
  19. Finally, press “Open” after getting done with the last point.
  20. The connection between the remote PC and the Ubuntu client will now happen in two consecutive stages. First, a connection between the remote PC and the SSH server is established. Then the server connects the remote desktop to the Ubuntu client.
  21. Browsing the following URL: will open the WordPress website by connecting to the localhost of the remote desktop through the SSH server on port 7000.
  22. This means that we have completed the task successfully, and the remote desktop and the Ubuntu client are now connected.
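For reference, the PuTTY settings in steps 11 to 18 correspond roughly to a single OpenSSH command; the user and server names are placeholders, since the actual addresses are not given above:

```shell
# -R with a 0.0.0.0 bind address mirrors PuTTY's "Local ports accept
# connections from other hosts" option; note the SSH server must also
# have "GatewayPorts yes" (or "clientspecified") set in its sshd_config
# for the non-loopback bind to take effect.
ssh -R 0.0.0.0:7000:localhost:80 user@ssh-server
```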


How to Get SQL Server Security Horribly Wrong

Posted: 31/01/2019 in Uncategorized

It is no good doing some or most of the aspects of SQL Server security right. You have to get them all right because any effective penetration of your security is likely to spell disaster. If you fail in any of the ways listed below, then you can’t assume that your data is secure, and things are likely to go horribly wrong.

Failure #1: Not securing the physical environment

A database team might go through extraordinary measures to protect its SQL Server instances and the data they contain, but not necessarily extend that diligence to the physical environment, often because the assets themselves are not in the team’s control. Yet physical devices, such as servers, backup drives, or disks in a storage area network (SAN), can be compromised just like any software or network component, so whoever is in charge of those devices had better take heed.

All it takes is one disgruntled employee with access to the physical assets to turn an organization upside down. Before anyone knows what happens, the individual can walk off with a backup drive full of credit card numbers, health records, company secrets, or other sensitive information, resulting in permanent damage to the organization’s reputation and financial well-being. Even if the data is encrypted, an employee with the necessary privileges, access to the security certificates, and a bit of know-how can make full use of that data on the open market.

Organizations serious about data security must implement defence in depth strategies that protect all layers of the data infrastructure, including the physical components, whether from inside threats or those coming from outside the organization. Every server and disk that hosts a SQL Server instance or data file must be physically protected so that only those who absolutely need access to those devices can get at them. That includes devices used for replications, mirrored databases, backups, or log files. The number of workers who have access to the physical devices should be strictly limited, with security measures scrutinized and updated regularly.

Another challenge, perhaps even more difficult to address, is trying to “enlighten” those wayward employees with privileged access who are careless with their desktops and laptops, leaving them unattended and unlocked for varying lengths of time. Anyone with less than honourable intentions can stroll along and change or copy sensitive data with a little bit of insight into the systems, not only making the careless user look bad, but again putting the company’s reputation and financial happiness on the line. If the imprudent user is a creature of habit, the miscreant thief has an even easier time getting at the sensitive data. Given the possibility of such a scenario, the only hope for the organization is employee education and IT policies that try to mitigate such risks as much as possible.

Failure #2: Not protecting the server environments

SQL Server can spread across environments far and wide, its impact reaching into machines and folders on the other side of the known network. Installation files might be located on a nearby server, log files on another device, data files on the new SAN down the hall, and backups in a remote data centre. Even within the host, SQL Server also makes use of the operating system (OS) files and the registry, spreading its impact even further.

Anything SQL Server touches should be protected to minimize the potential for a SQL Server instance or its data being compromised. For example, a database might contain sensitive data such as patient health records. Some of that data can end up in the log files at any given time, depending on the current operations. Even if all other files are carefully protected, a less-than-secure log file can have far-reaching implications if the network or attached devices are compromised.

Database and IT teams should take every precaution to protect the files and folders that support SQL Server operations, including OS files, SQL Server installation files, certificate stores, and data and log files: everything. Administrators should not assign elevated privileges to any related folder and should not share any of those folders on the network. Access to the files and folders should be limited to the greatest degree possible, granting access only to those accounts that need it. Teams should always keep in mind the principle of least privilege when it comes to NTFS permissions, that is, granting access only to the degree absolutely necessary and nothing more. The same goes for the machine as a whole. Don’t assign administrative access to a server when a user requires only limited privileges.

Failure #3: Implementing inadequate network security

When you consider all the ways a network can be hacked, it’s remarkable we can do anything safely. Weak network security can lead to viruses and network attacks and compromised data of unimaginable magnitude. Cyber-criminals able to exploit network flaws are not only able to inflict damage, but also download boatloads of data before being detected, whether credit card information, social security numbers, company strategies, or a host of other types of sensitive data.

An organization must do everything possible to protect its network and the way SQL Server connects with that network. One place to start is to ensure that you’re running the necessary antivirus software on all your systems and that the virus definitions are always up to date. Also be sure that firewalls are enabled on the servers and that the software is current. Firewalls provide network protection at the OS level and help to enforce an organization’s security policies.

In addition, configure SQL Server to use TCP/IP ports other than the default ones (1433 and 1434). The default ports are well known and are common targets for hackers, viruses, and Trojan horses. In addition, configure each instance to use a static port, rather than allowing the number to be assigned dynamically. You might not be able to ward off all attacks, but this way you’re at least obscuring the pathways.

To further obscure your SQL Server instance, consider disabling the Browser service to prevent snoopers from being able to search for it on the network. Clients can still access the instance as long as their connections specify the correct port number or named pipe. If disabling the Browser service is not an option, at least hide any instances you don’t want easily discovered, once again forcing any client connections to have to provide the port number or named pipe.

In addition, you should disable the network protocols that are not needed. For example, it’s unlikely you need to have both the Named Pipes and TCP/IP protocols enabled. In addition, you should grant the CONNECT permission only to the endpoints or logins that require it; for all others, explicitly deny the permission. If possible, avoid exposing the server that’s running SQL Server to the Internet altogether.
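As a sketch, endpoint-level CONNECT permissions can be managed with T-SQL along these lines; the login name is illustrative, while the endpoint names are SQL Server’s built-in defaults:

```sql
-- Allow the application account to connect over TCP only, and explicitly
-- deny it the unused Named Pipes endpoint.
GRANT CONNECT ON ENDPOINT::[TSQL Default TCP] TO [CONTOSO\AppService];
DENY  CONNECT ON ENDPOINT::[TSQL Named Pipes] TO [CONTOSO\AppService];
```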

Failure #4: Not updating and patching your systems

Some of you might recall what happened in 2003. The SQL Slammer computer worm infected over 75,000 servers running SQL Server. Within 10 minutes of deployment, the worm infected more than 90% of vulnerable computers and took down thousands of databases. Slammer exploited a buffer-overflow vulnerability in SQL Server to carry out an unprecedented denial-of-service (DoS) attack. The rest is cyber history.

The interesting part of this story is that the bug had actually been discovered long before the attack. In fact, Microsoft provided a fix six months in advance of the DoS onslaught. The organizations that got hit by Slammer had failed to patch their SQL Server instances.

Because of the onslaught of threats aimed at SQL Server and other systems, companies like Microsoft are regularly releasing service packs and patches and security fixes and other types of updates to address any newly found vulnerabilities. What these vendors can’t do, however, is force their customers to apply the updates.

In all fairness, there’s a good reason why many DBAs and other administrator types can be slow to get around to patching their systems. Applying these fixes can be a big deal. They take time and testing and careful consideration. If you apply an update to a production server without first testing it, you risk taking down the entire system. Overworked DBAs must have the time in their schedules and the resources necessary to apply these updates right.

Yet such incidents as the Slammer debacle demonstrate how critical it is to keep your SQL Server instances up to date. Not only can such attacks lead to excessive resource consumption, server downtime, and corrupt data, but also to even more serious concerns. The information gathered from a successful DoS attack can be used to launch subsequent attacks, some of which might be aimed at getting at the actual data.

Microsoft is constantly releasing critical updates and fixes for both Windows and SQL Server. You can manually download and install them, or set up your systems to update the software automatically if you can afford to take such risks. More often than not, you’ll want to first test the update and schedule a time to implement it into a production environment. It is no small matter, certainly, but keeping Windows and SQL Server current is one of the most effective strategies you can employ for protecting your system’s data.

Failure #5: Maintaining a large attack surface

We’re not talking rocket science here, just common sense. The more SQL Server features and services you have enabled, the larger the attack surface and the more vulnerable your system is to potential attacks. When setting up SQL Server, you should install only those services you need and disable those features you don’t plan to use anytime soon. Only the services and features for which there is a credible business need should be running.

Cybercriminals are a diligent lot, and the more you give them to work with, the happier they are. Given the proliferation of zero-day exploits, the less you can leave yourself exposed, the better. Why install Analysis Services or Integration Services on a production server when all you need is the database engine?

Some features in particular can raise the stakes on your system’s vulnerability. For example, xp_cmdshell lets you execute a command against the Windows host. If that feature is enabled on a SQL Server instance and a hacker gains access, xp_cmdshell can open doors to the OS itself. Other features too should be enabled only if needed, such as the Active Directory Helper service or VSS Writer service. You even have to be cautious with OPENROWSET and OPENDATASOURCE because they can facilitate access to external systems. In addition, you should consider disabling mail-related features that are not needed, such as the Database Mail service or the sp_send_dbmail system stored procedure. And don’t install sample databases or code on your production servers.
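Most of these features can be switched off with the sp_configure system stored procedure; a minimal sketch:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 0;                 -- block shell access to the OS
EXEC sp_configure 'Database Mail XPs', 0;           -- disable Database Mail
EXEC sp_configure 'Ad Hoc Distributed Queries', 0;  -- disable ad hoc OPENROWSET/OPENDATASOURCE
RECONFIGURE;
```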

If you’re running SQL Server 2005, you can use the Surface Area Configuration (SAC) tool to enable or disable installed components. For whatever components SAC doesn’t address, you can use Configuration Manager, the sp_configure system stored procedure, or a tool such as Windows Management Instrumentation.

Starting with SQL Server 2008, the Policy-Based Management system replaces SAC, providing one location to address all your surface configuration needs, in addition to configuring other components. For example, the following dialog box shows a new policy named Surface area conditions, with the Remote queries disabled condition selected. The condition specifies that the OPENROWSET and OPENDATASOURCE functions should be disabled.

Whichever tools you use, the important point is to install and enable only those components you need. You can always add or enable other features later. Just don’t give hackers any more ammunition than they already have.

Failure #6: Using improper authentication

SQL Server supports two authentication modes: Windows Authentication and Mixed Mode. Windows Authentication, the default authentication type, leverages Windows local accounts and Active Directory network accounts to facilitate access to the SQL Server instance and its databases. Mixed mode authentication supports both Windows accounts and SQL Server accounts (SQL logins). You can specify the authentication mode when installing SQL Server or change it after SQL Server has been installed by updating the server properties, as shown in the following figure.

Microsoft (and just about everyone else) recommends that you use Windows Authentication whenever possible, falling back on Mixed mode only to support backward compatibility, access to SQL Server instances outside the domain, and legacy applications. Windows Authentication is more secure because it can take advantage of the Windows and Active Directory mechanisms in place to protect user credentials, such as the Kerberos protocol or Windows NT LAN Manager (NTLM). Windows Authentication also uses encrypted messages to authorize access to SQL Server, rather than passing passwords across the network. In addition, because Windows accounts are trusted, SQL Server can take advantage of Active Directory’s group and user account administrative and password management capabilities.

SQL Server accounts and passwords are stored and managed within SQL Server, with passwords being passed across the network to facilitate authentication. Not only does this make the authentication process less secure, but it also means you can’t take advantage of the password controls available to Active Directory. In addition, you must be diligent about managing the passwords within SQL Server, such as mandating strong passwords and expiration dates as well as requiring new logins to change their passwords. You must also guard against accounts being created with blank passwords, including accounts created by third-party products. Weak passwords leave your database open to brute force attacks and other nefarious activities. Blank passwords open up your databases to everyone and everything.
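When you must create SQL logins, you can at least opt them into the Windows password policy; the login name and password here are illustrative:

```sql
CREATE LOGIN app_login
    WITH PASSWORD = 'Use-A-Long-Random-Passphrase-Here!' MUST_CHANGE,
    CHECK_POLICY = ON,       -- enforce Windows complexity and lockout policy
    CHECK_EXPIRATION = ON;   -- enforce password expiration
```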

Also with Mixed Mode authentication, you have to be particularly wary of that powerful built-in SA account. At one time, this account was actually created with a blank password. Fortunately, that has not been the case since SQL Server 2005. Many database pros recommend that you rename the SA account. Others suggest you disable it and never use it. Still others would have you do all three, which is not a bad strategy given how well known the account is and its history of abuse. At the very least, you should assign a complex password to the account and keep it disabled.

It should also be noted that SQL Server creates the SA account even if Windows Authentication is the selected mode, assigning it a randomly generated password and disabling the account. You might consider applying a complex password in this case as well, and certainly don’t enable the account.
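Putting those recommendations into T-SQL, with the password and new name as placeholders:

```sql
ALTER LOGIN sa WITH PASSWORD = 'A-Long-Random-Password-Here!';  -- complex password
ALTER LOGIN sa WITH NAME = [srv_admin_disabled];                -- optional rename
ALTER LOGIN [srv_admin_disabled] DISABLE;                       -- keep it disabled
```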

Failure #7: Assigning the wrong service accounts

Each SQL Server service requires a Windows account to run so the service can access resources such as networks and file folders. Because orchestrating this access can be somewhat cumbersome, services are often assigned high-privileged built-in accounts such as Local System or Network Service, thus preventing those mysterious SQL Server error messages that can pop up at peculiar moments, providing information that is so obscure that even the Google gods cannot offer an answer.

The problem with using these accounts is that, if SQL Server were to be compromised, the OS too could be compromised, to the degree that the service account has access. These built-in accounts can actually inherit elevated privileges in Active Directory, which are not required in SQL Server. For example, you can easily assign the Local System account to a SQL Server service, as shown in the following figure.

The Local System account is a very high-privileged account with extensive access to the local system. It is like granting a user administrative privileges to the server. If SQL Server were to be compromised, the would-be hacker could potentially have unrestricted access to the machine and its resources.

Despite the convenience that these built-in accounts bring, you should be following the principles of least privilege when assigning accounts to SQL Server services. The accounts should have exactly the permissions necessary for the service to do its job. You should also use a different account for each service, with the permissions set up specifically to meet the needs of that account.

For example, SQL Server Agent will likely require a different set of permissions from Integration Services. Also, avoid assigning accounts being used by other services on the same server. In addition, you must take into consideration whether the service will need access to domain resources, such as a data file on a network share. Keep in mind too that service accounts should also be governed by good password management policies, such as enforcing complex passwords and expiration dates.

Failure #8: Failing to control access to SQL Server resources

Like all aspects of SQL Server access, account access should adhere to the principle of least privilege, assigning only the permissions necessary to perform specific functions. Along with this principle, we can add another: separation of duties. This guards against conflicts of interest and the inadvertent combination of privileges that leads to excessive access.

All too often, database administrators (or someone) will assign one or two sets of generic privileges to all users, without thought to the consequences of such action. For example, an assistant bookkeeper might need access to sales totals and quantities but is granted access to the entire database, which includes sensitive customer information. That employee could abuse that position by modifying records or stealing data. Together, the principles of least privilege and separation of duties help to ensure that database owners maintain control over who can access SQL Server and its data and what levels of access they have.

This also goes for the applications and services accessing the database. They too should be using low-privileged accounts that grant only the access needed. In addition, be sure to remove unused default accounts created by third-party apps (as well as any other accounts not in use).

When possible, avoid granting individual accounts access to SQL Server and instead grant access to the security groups that contain these accounts, granting them the specific access they need. This helps enforce separation of duties and makes it easier to manage large numbers of accounts. Also, consider disabling the guest accounts in your databases so that members of the public server role can’t access those databases unless they’ve been specifically granted the permissions.

Pay particular attention to how you’re assigning administrative permissions to users. For example, choose wisely which accounts are added to the sysadmin fixed server role: its members can do just about anything they want in SQL Server. And, as mentioned earlier, don’t use the sa account to manage SQL Server.

You should also avoid assigning the CONTROL SERVER permission to individual accounts. It provides full administrative privileges over SQL Server and is already granted to the sysadmin role. Many database pros also recommend that you remove the Windows BUILTIN\Administrators group from the sysadmin role if it has been added. (It was added by default prior to SQL Server 2008.) Although removing the group lets you better control SQL Server access, you have to be careful doing so because it can impact your SQL Server installations.

Make use of SQL Server’s hierarchical permission structure, in which principals (users and roles) can be granted or denied permissions on securables (the databases and their objects). For example, you can grant a principal access at the database, schema, or table level. In the following figure, the database role has been granted SELECT and INSERT permissions on the ZipCodes table, but denied UPDATE and DELETE permissions.
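In T-SQL, that kind of table-level assignment might look something like this (the role name here is illustrative; ZipCodes is the table from the example above):

```sql
-- Grant narrow rights at the table level to a custom database role.
CREATE ROLE DataEntryRole;

GRANT SELECT, INSERT ON dbo.ZipCodes TO DataEntryRole;
DENY  UPDATE, DELETE ON dbo.ZipCodes TO DataEntryRole;
```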

By taking this approach, you can ensure that users perform only specific tasks and not perform others. Again, the key is to follow the principles of least privilege and the separation of duties. This applies to administrative accounts as well as other types of accounts.

Failure #9: Failing to encrypt sensitive data

An organization less diligent about security might assume that, because SQL Server is a backend system, the databases are inherently more secure than public-facing components and be satisfied that the data is ensconced in a protective layer. But SQL Server still relies on network access and is consequently exposed enough to warrant protection at all levels. Add to this the possibility that physical components such as backup drives can be lost or stolen (most likely the latter), and you can’t help but realize that no protection should be overlooked.

Encryption is one such protection. Although it cannot prevent malicious attempts to access or intercept data, no more than it can prevent a drive from being stolen, it offers another safeguard for protecting your data, especially that super-sensitive stuff such as credit card information and social security numbers. That way, if the data has been accessed for less than ethical reasons, it is at least protected from prying eyes.

SQL Server supports the ability to encrypt data at rest and in motion. On the at-rest side, you have two options: cell-level encryption and Transparent Data Encryption (TDE). Cell-level has been around for a while and lets you encrypt individual columns. SQL Server encrypts the data before storing it and can retain the encrypted state in memory.

Introduced in SQL Server 2008, TDE encrypts the entire database, including the log files. The data is encrypted when it is written to disk and decrypted when being read from disk. The entire process is transparent to the clients and requires no special coding. TDE is generally recommended for its performance benefits and ease of implementation. If you need a more granular approach or are working with SQL Server 2005 or earlier, then go with cell-level encryption.
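As a sketch, enabling TDE involves a database encryption key protected by a server certificate. All names below are illustrative, and the master key password is a placeholder; be sure to back up the certificate and its private key:

```sql
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

USE YourDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;
ALTER DATABASE YourDatabase SET ENCRYPTION ON;
```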

SQL Server can also use the Secure Sockets Layer (SSL) protocol (and its successor, TLS) to encrypt data transmitted over the network, whether between SQL Server instances or between SQL Server and a client application. In this way, data can be protected throughout a session, making it possible to pass sensitive information over a network. Of course, SSL doesn’t protect data at rest, but when combined with TDE or cell-level encryption, data can be protected at every stage.

An important component of any encryption strategy is key management. Without going into all the gritty details of SQL Server key hierarchies, master keys, and symmetric and asymmetric keys, let’s just say you need to ensure these keys (or certificates) are fully protected. One strategy is to use symmetric keys to encrypt data and asymmetric keys to protect the symmetric keys. You should also password-protect keys, and always back up the master keys and certificates. Also, back up your database to maintain copies of your symmetric and asymmetric keys, and be sure those backups are secure.
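That strategy, symmetric keys for the data and asymmetric protection for the symmetric keys, might look roughly like this with cell-level encryption (all object and column names are illustrative):

```sql
CREATE CERTIFICATE CardCert WITH SUBJECT = 'Card data protection';
CREATE SYMMETRIC KEY CardKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE CardCert;  -- asymmetric key protects the symmetric key

OPEN SYMMETRIC KEY CardKey DECRYPTION BY CERTIFICATE CardCert;
UPDATE dbo.Payments
SET CardNumberEnc = EncryptByKey(Key_GUID('CardKey'), CardNumber);
CLOSE SYMMETRIC KEY CardKey;
```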

Failure #10: Following careless coding practices

After all these years, after all the press, after all the countless articles and recommendations and best practices, SQL injection still remains a critical issue for SQL Server installations. Whether because of sloppy coding within SQL Server or within the web application passing in SQL, the problem persists. Hackers are able to insert malicious code into a string value passed to the database and in the process do all sorts of damage: deleting rows, corrupting data, retrieving sensitive information. Any code that accesses SQL Server data should be vetted for potential SQL injection and fixed before going into production. Fortunately, those fixes are often quite straightforward, such as validating user input or delimiting identifiers.
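To illustrate one of those straightforward fixes, dynamic SQL built by string concatenation is injectable, while a parameterized call via sp_executesql is not (table and parameter names are illustrative):

```sql
-- Vulnerable: user input concatenated straight into the statement.
-- EXEC ('SELECT * FROM dbo.Customers WHERE City = ''' + @UserInput + '''');

-- Safer: the input is passed as a typed parameter, never parsed as SQL.
EXEC sp_executesql
    N'SELECT * FROM dbo.Customers WHERE City = @city',
    N'@city nvarchar(50)',
    @city = @UserInput;
```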

But SQL injection is not the only risk. Issues can arise if the execution context within procedures, functions, or triggers is not explicitly called out. By default, the code executes as the caller, but this can lead to problems if an account with elevated privileges has been compromised. However, as of SQL Server 2005, you have more control over the execution context (using EXECUTE AS) and can specify the account to use as the execution context, thus ensuring full control over a procedure’s capabilities.
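A sketch of pinning the execution context on a procedure (procedure and table names are illustrative):

```sql
-- Runs with the owner's permissions regardless of who calls it,
-- so callers need no direct rights on dbo.Orders.
CREATE PROCEDURE dbo.ArchiveOldOrders
WITH EXECUTE AS OWNER   -- alternatives: CALLER (the default), SELF, or a named user
AS
BEGIN
    DELETE FROM dbo.Orders
    WHERE OrderDate < DATEADD(YEAR, -7, GETDATE());
END;
```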

Another way coders can protect their databases is to create procedures, functions, and views to present the data without providing access to the base tables. This helps to abstract the actual schema and allows access to be controlled at the abstraction layer, restricting access altogether to the tables themselves. This approach also has the benefit of more easily accommodating changes to the application as well as to the underlying data structure. In some cases, the database team won’t have this option because of application requirements or the technologies being used, such as a data abstraction layer, but when it is possible, providing this extra layer of protection can be well worth the effort.
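For example, access can be granted on a view while the base table stays locked down (all names here are illustrative):

```sql
CREATE VIEW dbo.CustomerDirectory
AS
SELECT CustomerID, FirstName, LastName, City   -- sensitive columns omitted
FROM dbo.Customers;
GO
GRANT SELECT ON dbo.CustomerDirectory TO SalesRole;
DENY  SELECT ON dbo.Customers TO SalesRole;
```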

There are, of course, plenty of other coding practices that can result in compromised data. The key is to make certain that all code is reviewed and tested for vulnerabilities before it is implemented in production.

Failure #11: Not verifying SQL Server implementations

Regardless of the safeguards you’ve implemented to protect your SQL Server databases and their data, security can be breached without you being aware that something is wrong. The only way you can fully protect your data is to monitor and verify your SQL Server installations. Monitoring in this sense does not refer to auditing (which we’ll discuss shortly), but to the process of assuring that everything is running as it should, the data is intact, and nothing too strange is going on.

For example, you should be monitoring CPU, memory, and disk usage, not only for performance considerations but also to ensure you’re not seeing any anomalies that might point to such issues as illegal data downloads or malware accessing or modifying the databases and their data. Activity monitoring can also be a useful stopgap measure until you get a chance to apply the latest security patches. You should take whatever steps might help you expose abuses that would otherwise go unnoticed until it’s too late.

You can also make use of such tools as DBCC CHECKDB, which can help uncover data corruption as a result of a cyber-attack or illegal access. Spot-checking security-related configuration settings can also be useful in discovering if a rogue user has gained access and is wreaking havoc on the permissions or other settings. In general, you want to check your databases to make sure they’re doing only what they’re supposed to be doing and that data is in the state you expect it to be.
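A basic integrity check can be run as follows (the database name is illustrative):

```sql
-- Checks the logical and physical integrity of all objects in the database.
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS;
```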

Another helpful tool is the SQL Server Best Practices Analyzer. The Analyzer is free and can help you gather information about security settings and identify vulnerabilities. The tool uses SQL Server recommendations and best practices to uncover potential security risks. And don’t forget the server hosting SQL Server. For example, you (or an IT administrator) can use the Microsoft Security Compliance Manager to enhance the server’s security via Group Policy.

Failure #12: Failing to audit your SQL Server instances

Auditing is a big topic, too big to cover in just a few paragraphs, but it is a topic too important not to mention. One of the most beneficial steps you can take as part of a complete security strategy is to implement auditing. If you’re not monitoring user activity, you might be opening yourself up to severe consequences in the form of corrupt or compromised data.

You have several options for auditing your systems. You can use SQL Server’s built-in features (which have greatly improved since SQL Server 2008), or you can implement one of many available third-party solutions, which usually come in the form of host-based agents or network-based monitors. There are pros and cons to any solution, with performance and cost being two of the main considerations. The important point is to maintain an audit trail of all access to sensitive data. Without such an audit trail, you risk being out of compliance as well as having your data compromised.

You will, of course, have to balance the overhead that comes with auditing against the need to protect sensitive information. You’ll have to take this on a case-by-case basis. Determine the level of granularity needed to ensure a credible audit as well as which operations and data should be audited. At the very least, you should audit both successful and failed login attempts. Also, keep in mind any compliance-related requirements that might govern your auditing strategy.
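Using SQL Server’s built-in auditing (available since SQL Server 2008), capturing both successful and failed logins might look like this (the audit names and file path are illustrative):

```sql
USE master;
CREATE SERVER AUDIT LoginAudit
    TO FILE (FILEPATH = 'D:\Audits\');
ALTER SERVER AUDIT LoginAudit WITH (STATE = ON);

CREATE SERVER AUDIT SPECIFICATION LoginAuditSpec
    FOR SERVER AUDIT LoginAudit
    ADD (FAILED_LOGIN_GROUP),
    ADD (SUCCESSFUL_LOGIN_GROUP)
    WITH (STATE = ON);
```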

When I see the words “free trial,” I know I’m probably going to have to whip out my credit card and enter in the number to “not get charged.” Then I end up forgetting about the trial and want to kick myself in the ass when I see my statement at the end of the month.

In order to avoid that rigmarole, you can actually use fake credit card numbers instead of your own, and you can do that using the site, which can generate up to 9,999 credit card numbers at a time, or just one.

Now, to be completely clear, these numbers cannot be used to purchase any item. For that to work, you would need a valid expiration date and CVV (card verification value) number. This site merely provides the standard 16-digit credit card number that can be used to bypass certain online forms that only ask for the number.

How Does It Work?

The credit card number generator uses a system based on the Luhn algorithm, which has been used to validate numbers for decades. You can learn more about the algorithm on their webpage. A fake number will work for sites that store credit card information to either charge you later or ask you to upgrade.
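The checksum itself is simple enough to sketch in a few lines of Python. The sample number below is a commonly circulated Luhn-valid test number, not a real card:

```python
def luhn_valid(number):
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number]
    checksum = 0
    # Walk the digits from the right; double every second one,
    # subtracting 9 whenever the doubled digit exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_valid("4532015112830366"))  # True
print(luhn_valid("4532015112830367"))  # False
```

A generator simply picks 15 random digits and then computes the 16th so the checksum works out.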

For sites that ask for an upfront fee or have an automatic charge sometime down the line (Hulu Plus, Netflix, Spotify), this won’t work since they ask for more than just a credit card number for validation. You can, however, get unlimited free trials on those sites using a simple trick with your email address if you have a valid card number with expiration date and CSV.

Getting a Card Number on Android

There’s also an Android application for getting fake card numbers called CardGen, available for free in the Play Store. You can generate and validate credit card numbers directly from the app, making it easy to use on the go as well. Validation, in particular, would be useful if you were accepting credit card payments on your own site and wanted to make sure the cards were legit.

The app is ad-supported, but since it’s free, I can live with that. In the generate field you can select from most of the major credit card providers, including American Express, Mastercard, Visa, and Discover. The disclaimer explains what the app does and how you should use it.

What would you do with these credit card number generators? Let us know in the comments section.

You’ve probably heard that a strong password is really important to keep your accounts safe. You’ve also probably heard that people are still not creating good passwords. But even if you are—or at least you think you are—hackers are smart and they’ve figured out ingenious ways to crack what you think is a secure password.

Here’s how they do it:

Dashlane, a password manager tool, took a look at 61 million passwords from data breaches. These passwords were available to hackers, of course, but also to the public and even security researchers. To the surprise of precisely nobody, the biggest takeaway was that people’s passwords were far from original, and most of them were actually the same.

The most popular passwords were “Ferrari,” “iloveyou,” “starwars,” and of course “password1234.”

If you’re a hacker, let’s be honest, these aren’t hard to guess. And, in fact, there are tools out there that will help make life even easier.

“John the Ripper”

One of the most common tools is “John the Ripper.” This tool uses what’s known as a “dictionary attack,” where it takes a list of dictionary words and uses them to crack passwords. The tool can try millions of words in a short space of time, and it can do sneaky things like replacing an “a” with an “@” or an “e” with “3.”

In short, if your password contains a real word of any kind, even an inexperienced hacker can use a tool to figure it out in seconds.
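To show just how cheap those substitutions are to automate, here is a toy Python version of the mangling step. The substitution table is a small assumed subset; real tools such as John the Ripper ship with far larger rule sets:

```python
# Toy version of a dictionary-attack "mangling" step.
SUBSTITUTIONS = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}

def mangle(word):
    """Generate common variants of a single dictionary word."""
    variants = {word, word.capitalize(), word.upper()}
    leet = "".join(SUBSTITUTIONS.get(c, c) for c in word.lower())
    variants.update({leet, leet.capitalize()})
    # Append the suffixes people commonly bolt on to "strengthen" a word.
    variants.update(w + s for w in list(variants) for s in ("1", "123", "!"))
    return variants

print(len(mangle("password")))
```

Multiply a few dozen variants per word by a wordlist of millions, and "P@$$w0rd123" falls in seconds.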

Password walking

One other thing Dashlane noticed was that many people thought they were being creative by using a tactic called “password walking.” Basically, this is when you “walk” your fingers across the keyboard, hitting keys that are adjacent. This creates a password that looks unique and random, like “zxcvbn,” “1q2w3e4r,” or “poiuytr.”

While you might think a password such as this is secure, hackers know people use these tricks and can plug in any number of variations into their tools and test them out. Once again, in a matter of moments, a hacker will figure out your password.
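These walks are trivially enumerable. A toy Python sketch that generates every single-row walk (real tools also cover diagonals, multi-row patterns like “1q2w3e4r”, and shifted variants):

```python
# Enumerate keyboard "walks": runs of adjacent keys within one row.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm", "1234567890"]

def keyboard_walks(min_len=6, max_len=8):
    walks = set()
    for row in ROWS:
        for length in range(min_len, max_len + 1):
            for start in range(len(row) - length + 1):
                run = row[start:start + length]
                walks.add(run)
                walks.add(run[::-1])  # reversed walks like "poiuytr"
    return walks

print("zxcvbn" in keyboard_walks())   # True
print("poiuytr" in keyboard_walks())  # True
```

The entire candidate set fits in a wordlist of a few hundred entries, so a cracker tests them all in a blink.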

Password formula

Some may think that a password formula based on the name of the particular website you are using is a smart idea. But, again, it’s hard to trick a hacker. This is especially true if a hacker figures out your “base password” (the part of your password that you use over and over again…another common tactic). They’ll then use that and try different variations, or other common combinations, to piece the puzzle together.

Let’s imagine, for instance, that you use the password “Porsche3$5^” for Twitter and “Porsche4%6&” for Facebook. All you did was change the second half and then went “password walking.” This is child’s play for hackers.

“How to hack passwords,” from a hacker himself

Here’s what goes on in the mind of a hacker, according to a person who has hacked thousands of accounts and documented his tactics on Lifehacker.

Follow his logic in this section taken from his article:

  • You probably use the same password for lots of stuff right?
  • Some sites you access such as your Bank or work VPN probably have pretty decent security, so I’m not going to attack them.
  • However, other sites like the Hallmark e-mail greeting cards site, an online forum you frequent, or an e-commerce site you’ve shopped at might not be as well prepared. So those are the ones I’d work on.
  • So, all we have to do now is unleash Brutus, wwwhack, or THC Hydra on their server with instructions to try, say, 10,000 (or 100,000 – whatever makes you happy) different usernames and passwords as fast as possible.
  • Once we’ve got several login+password pairings we can then go back and test them on targeted sites.
  • But wait… How do I know which bank you use and what your login ID is for the sites you frequent? All those cookies are simply stored, unencrypted and nicely named, in your Web browser’s cache.

From this, you can see how the mind of a hacker works. And also how sophisticated (yet kind of simple) it is for them to figure things out.

And what’s not mentioned in this segment is the part your social media channels play—you know, where you talk about your favourite dog “Chappy” or your kid’s birthdate. Odds are, you probably use these personal details in your passwords. So, a quick search on Facebook and a hacker can find a few good words and numbers to plug into their hacking tool and figure out some viable options.

The moral of the story is this: Stop trying to come up with clever passwords based on names, places, or things in your life. Instead, use a password manager, which will automatically create random passwords for all of your accounts. For example, my password manager just generated “ppwjK!C$p8g^2B”, which is ridiculously strong and highly unlikely to be guessed. An added benefit is that a password manager will remember the passwords, so you don’t have to.
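If you were to roll your own generator, Python’s `secrets` module does the job in a few lines. The alphabet chosen here is an assumption; use whatever character classes the site permits:

```python
import secrets
import string

# Cryptographically secure random password generation.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length=14):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```

Note the use of `secrets` rather than `random`, which is not suitable for security-sensitive values.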

Also, make sure your password is long. Here’s an image that shows just how much easier it is for a hacker to crack a short password, and what a difference it makes using a variety of characters rather than just lowercase letters.

From that same Lifehacker article:

Pay particular attention to the difference between using only lowercase characters and using all possible characters (uppercase, lowercase, and special characters – like @#$%^&*). Adding just one capital letter and one asterisk would change the processing time for an 8-character password from 2.4 days to 2.1 centuries.
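The arithmetic behind that claim is just exponentiation of the alphabet size. A quick sanity check in Python, assuming roughly 94 printable ASCII characters for the full alphabet:

```python
# The number of candidate passwords is alphabet_size ** length.
lower_only = 26 ** 8   # lowercase letters only
all_chars = 94 ** 8    # upper + lower + digits + printable specials

# The full alphabet multiplies the brute-force work by roughly
# thirty thousand, which is what turns days into centuries.
print(all_chars // lower_only)
```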


Though you cannot stop your important accounts from being breached (that is up to the organizations and companies that hold them), you can do something on your end to minimize the chance of your password being hacked.

When it comes to web application security, one often thinks about the obvious: sanitize user input, transmit data over encrypted channels, and use secure functions. Often overlooked are the positive effects that HTTP response headers, in conjunction with a modern web browser, can have on web security.

Active Security

Here we will take a look at the headers recommended by the Open Web Application Security Project (OWASP). These headers can be utilised to actively increase the security of the web application.
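As a quick way to see where a site stands, the presence of these headers can be checked against a captured response. A small hypothetical helper, operating on a plain dict of response headers:

```python
# Hypothetical audit helper: report which of the security headers
# discussed below are present in a set of response headers.
SECURITY_HEADERS = [
    "X-Frame-Options",
    "Strict-Transport-Security",
    "X-XSS-Protection",
    "X-Content-Type-Options",
    "Public-Key-Pins",
    "Content-Security-Policy",
]

def audit_headers(headers):
    """Map each security header to its value, or None if missing."""
    lowered = {name.lower(): value for name, value in headers.items()}
    return {name: lowered.get(name.lower()) for name in SECURITY_HEADERS}

report = audit_headers({"x-frame-options": "DENY", "Server": "nginx"})
print(report["X-Frame-Options"])  # DENY
```

Header names are compared case-insensitively, as HTTP requires.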


X-Frame-Options

This header gives instructions to the browser if and when a page should be displayed as part of another page (i.e. in an IFRAME). Allowing a page to be loaded inside an IFRAME opens up the risk of a so-called clickjacking attack. In this attack the target site is loaded in the background, hidden from the victim. The victim is then enticed to perform clicks on the website (e.g. through a survey or a prize draw); these clicks are secretly executed on the target site in the background. If the victim is currently logged in to the target site, those clicks are performed in the context of this user’s session. Via this method it is possible to execute commands as the user, as well as to exfiltrate information from the user’s context.

The X-Frame-Options can be used with the following options:

  • DENY
  • SAMEORIGIN
  • ALLOW-FROM <uri> (where <uri> is your desired URI, including the protocol handler)

Unless your application explicitly needs to be loaded inside an IFRAME, you should set the header to DENY.

X-Frame-Options: DENY

If your application uses IFRAMEs within the application itself, then you should set the header to SAMEORIGIN:

X-Frame-Options: SAMEORIGIN

If you want your page to be frameable from a different origin, then you should explicitly define the external origin (the value below is a placeholder):

X-Frame-Options: ALLOW-FROM https://external.example.com

Please note that the ALLOW-FROM directive of the X-Frame-Options header expects exactly one origin; it does not support paths, wildcards, or lists of multiple values.


Strict-Transport-Security

This header, often abbreviated as HSTS (HTTP Strict Transport Security), tells the browser to enforce an HTTPS connection whenever a user tries to reach a site sending this header. All major browsers support this feature and should:

  1. Only connect to the site via HTTPS
  2. Convert all HTTP references on the site (e.g. JavaScript includes) to HTTPS and
  3. Refuse to load the website in case of errors with the SSL certificate (e.g. Certificate expired, broken certificate chain, …)

It is important to notice that this header can only be set via an HTTPS response; the user therefore needs to connect to the site at least once via HTTPS, unless you make some special preparations (more on that in a moment). It is also important to note that the header is only valid for a certain amount of time: the lifetime is specified in seconds. Context recommends the following setting, which tells the browser to obey the STS setting for half a year:

Strict-Transport-Security: max-age=15768000

Should this rule be extended to cover all subdomains, then the header can be extended by adding the attribute ‘includeSubDomains’:

Strict-Transport-Security: max-age=15768000; includeSubDomains

Some browsers (at least Chrome, Firefox, IE11/Edge and Safari) ship with a “preload list”, a list of URLs that have explicitly declared that they want to use HSTS. If the user tries to access a listed URL, the browser automatically enforces the HSTS rule, even for the very first connection, which would otherwise have been vulnerable to a man-in-the-middle attack. To add your own website to this preload list, you have to submit the URL on this page and append the preload directive to the header, e.g.:

Strict-Transport-Security: max-age=15768000; includeSubDomains; preload


X-XSS-Protection

This header is surrounded by a little controversy, and different people recommend different settings; some even recommend explicitly disabling it. So what is the deal with this header?

The purpose of this header is to instruct the web browser to utilize its Cross-Site Scripting protection, if present (X-XSS-Protection: 1). Currently only Chrome, Internet Explorer and Safari have such an engine built-in and understand this header (Firefox seems to rely on the third party addon NoScript).

It might seem like a good idea to try and filter malicious requests where the attack happens – at the browser – but filtering is very hard, especially when one tries to heuristically detect malicious code, sanitize it, and at the same time maintain a working site. This led to several filter bypasses and even introduced cross-site scripting vulnerabilities on previously healthy sites.

Once it became apparent that building a heuristic filter that tries to sanitize unknown code is a Sisyphean task, a new all-or-nothing approach was invented: X-XSS-Protection: 1; mode=block. If this mode is set, the browser is instructed not to render the page at all, instead displaying an empty page (about:blank). But even that approach had flaws in its early implementations, leading some major sites to explicitly disable the XSS filter (X-XSS-Protection: 0).

So while it is difficult to give a definitive recommendation for this header, it seems that the variant ‘X-XSS-Protection: 1; mode=block’ has matured rather well and has outgrown its early flaws. Besides that, the best protection against cross-site scripting is still sanitizing all of your input and output ;-).

To explicitly enable the filter that tries to sanitize malicious input set the following header:

X-XSS-Protection: 1

To use the all-or-nothing approach that blocks a site when malicious input is detected set the following header:

X-XSS-Protection: 1; mode=block

Additionally, one can set a ‘report’ parameter that contains a URL. If one of the WebKit XSS auditors (Chrome, Safari) encounters an attack, it will send a POST message to this URL containing details about the incident:

X-XSS-Protection: 1; mode=block; report=https://domain.tld/folder/file.ext


X-Content-Type-Options

This header can be used to prevent certain versions of Internet Explorer from ‘sniffing’ the MIME type of a page. Internet Explorer will interpret pages served with ‘Content-Type: text/plain’ as HTML if they contain HTML tags, which introduces cross-site scripting risks when one has to deal with user-provided content. The X-Content-Type-Options header knows only one option – ‘nosniff’ – which prevents the browser from trying to sniff the MIME type.

X-Content-Type-Options: nosniff


Public-Key-Pins

Public-Key-Pinning, also known as HTTP Public Key Pinning (HPKP for short), is still relatively new and is not yet widely used. However, it has great security potential, as it allows site operators to specify (‘pin’) a valid certificate and rely less on CAs, which in the past have proven to be susceptible to attack (e.g. any CA could create a technically valid and trusted certificate that has not been issued by you). Similar to HSTS, the browser is then supposed to remember this pin and only accept connections to a site if the certificate pin matches the pin provided by the header. This however means that an unexpected certificate change can leave visitors locked out of the web presence. For this reason it is required to provide a backup certificate pin that can be used when the first one fails. The header must also include a max-age attribute, which once again specifies the lifetime in seconds. Please bear in mind that this is also the potential lockout time for an unaware user.

Public-Key-Pins: pin-sha256="<sha256>"; pin-sha256="<sha256>"; max-age=15768000;

Should this rule be extended to cover all subdomains, then the header can be extended by adding the attribute ‘includeSubDomains’:

Public-Key-Pins: pin-sha256="<sha256>"; pin-sha256="<sha256>"; max-age=15768000; includeSubDomains


Content-Security-Policy

The Content-Security-Policy (CSP for short) is a flexible approach to specify which content on a site may be executed and which may not. One of the current problems is that the web browser does not know which sources to trust and which not to trust; for example, is a given third-party JavaScript include good or bad? The only proper solution to this is source whitelisting, where the developer specifies legitimate resource locations. A very basic example of how to allow JavaScript (script-src) from both the local site (‘self’) and an external origin (a placeholder domain is shown here):

Content-Security-Policy: script-src 'self' https://apis.example.com

CSP has a few additional keywords that allow for very granular access control. It is important to notice that CSP is intended as a per-site model, so every site needs its own set of headers.

Passive Security

The following headers do not actively enable any security-related features but rather have a passive impact on security, typically by revealing more information than necessary. By now it is well established that security by obscurity is a more than questionable concept if you rely on it alone. However, there is little to gain from leaving the cards lying open on the table and providing an attacker with valuable information – just don’t think that withholding this information alone would be enough.


Cookie Attributes

Often overlooked are the special attributes that can be associated with cookies, which can drastically reduce the risks of cookie theft.


HttpOnly

The HttpOnly attribute tells the browser to deny JavaScript access to this cookie, making it more difficult to steal via cross-site scripting.

Set-Cookie: cookie=xxxxxxxxx; HttpOnly


Secure

The Secure attribute tells the browser to send this cookie only over an HTTPS connection. This should be the norm for all session and authentication related cookies, as it prevents easy interception via an unencrypted HTTP connection.

Set-Cookie: cookie=xxxxxxxxx; Secure

Of course these attributes can be combined:

Set-Cookie: cookie=xxxxxxxxx; HttpOnly; Secure
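In Python’s standard library, for instance, both attributes can be set via `http.cookies` (the cookie name and value are placeholders):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "xxxxxxxxx"
cookie["session"]["httponly"] = True   # deny JavaScript access
cookie["session"]["secure"] = True     # send over HTTPS only

# Produces the value for a Set-Cookie response header.
print(cookie["session"].OutputString())
```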


Server and X-Powered-By

Both of these headers advertise the server software in use and its version number. While these headers might be nice for debugging purposes, they do not contribute to the user experience in any way and should either be omitted entirely or reduced to a value that does not leak any version details.


Cache-Control

Another issue that is often overlooked is the caching of sensitive information by the browser. A browser frequently stores elements of a website in a local cache to speed up the browsing experience. While this behaviour is fine for non-sensitive sites and elements like graphics and stylesheet information, it has to be avoided for sensitive information (e.g. pages from an authenticated area of a web application). This problem gets worse in a shared computing environment (e.g. office, school, internet café …), where other users can easily access your browser cache. To tell the browser (and possible intermediate caches such as proxies) not to store anything in its cache, one should use the following directives:

Cache-Control: no-cache, no-store
Expires: 0
Pragma: no-cache

It is important to notice that the often encountered directive “Cache-Control: private” cannot be considered secure in a shared computing environment, as it allows the browser to store these elements in its cache.


ETag

The “Entity Tag” (ETag for short) header is used for caching purposes. The server uses a special algorithm to calculate an individual ETag for every revision of a file it serves. The browser is then able to ask the server if the file is still available under this ETag. If it is, the server responds with a short HTTP 304 status telling the browser to use the locally cached version; otherwise it sends the full resource as part of an HTTP 200 status.

While this is a useful header, you’ll sometimes find a reference to it in vulnerability-related articles or reports. The problem is that certain versions of Apache (before 2.3.14) used to disclose, in their default configuration, the inode of the file being served. The inode can be used for further attacks, e.g. via the Network File System (NFS), which uses these inodes to create file handles. The problematic default configuration has been corrected in more recent Apache versions, but you should nonetheless make sure that your FileETag setting in httpd.conf does not contain the INode attribute. The following line is fine:

FileETag MTime Size

The following line is NOT: 

FileETag INode MTime Size


X-Robots-Tag

The X-Robots-Tag header can be used to give search engines that support it directives on how a page or file should be indexed. The advantage of the X-Robots-Tag over a single robots.txt file or the robots meta tag is that this header can be set and configured globally and can be adjusted to a very granular and flexible level (e.g. via a regular expression that matches certain URLs). Sending a meta tag with a media file – not possible. Sending an HTTP header with a media file – no problem. It also has the advantage of disclosing information on a per-request basis instead of in a single file. Just think about the secret directories that you don’t want anyone to know about: listing them in a robots.txt file with a disallow entry? Probably a bad idea, since this lets everyone immediately know what you want to hide – you might as well just put a link on your website.

So should you ditch the robots.txt altogether and rely solely on the X-Robots-Tag? Probably not; instead, combine them for the greatest compatibility. However, keep in mind that the robots.txt file should only contain files and directories that you want to be indexed. You should never list files that you want to block; instead, place a general disallow entry in the robots.txt:

An example to block everything:

User-Agent: *
Disallow: /

An example that tells the crawler to index everything under /Public/ but not the rest:

User-Agent: *
Allow: /Public/
Disallow: /


Setting the Headers

Below you can find general examples of how to set a static custom HTTP header in different HTTP server software, as well as links to more in-depth manuals for setting more complex header rules.


Apache

For Apache it is recommended to use the module 'mod_headers' to control HTTP headers. The directive 'Header' can be placed almost anywhere in the configuration file, e.g.:

<Directory "/var/www/dir1">
    Options +Indexes
    Header set X-XSS-Protection "1; mode=block"
</Directory>

For a more detailed guide on how to set HTTP headers with mod_headers please refer to this page.

Internet Information Services (IIS)

For IIS there are two ways to set custom headers:

1. Via command line:

appcmd set config /section:httpProtocol /+customHeaders.[name='X-XSS-Protection',value='1; mode=block']

2. Via graphical interface: Open IIS Manager and use the Connections pane to find the appropriate level you want to enable the header for. In the Home pane, double-click on 'HTTP Response Headers'. Now look for the Actions pane, click on 'Add…', and set both the name and the value for the header you want to set. In our example the name would be 'X-XSS-Protection' and the value would be '1; mode=block'.
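Alternatively, the same header can be set declaratively in the site's web.config (a sketch for IIS 7 and later):

```xml
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="X-XSS-Protection" value="1; mode=block" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```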

For a more detailed guide on how to set HTTP headers with IIS please refer to this page.


Lighttpd

For Lighttpd it is recommended to use the module 'mod_setenv' to control HTTP headers. The directive 'setenv.add-response-header' can be placed almost anywhere in the configuration file, e.g.:

setenv.add-response-header = (
      "X-XSS-Protection" => "1; mode=block"
)

For a more detailed guide on how to set HTTP headers with Lighttpd please refer to this page.


NGINX

For NGINX it is recommended to use the module 'ngx_http_headers_module' to control HTTP headers. The directive 'add_header' can be placed in the appropriate location in the configuration file, e.g.:

server {
    listen       80;
    root         html;

    location / {
        add_header X-XSS-Protection "1; mode=block" always;
    }
}

For a more detailed guide on how to set HTTP headers with NGINX please refer to this page.

Summary and Conclusions

We have seen that there are quite a few more or less new HTTP headers that can actively contribute to a web site’s security. We have also seen that there are a few well-established headers that might be worth revisiting to decrease the amount of information that is leaked.


Following are a few references for the technically interested reader that wants to get a more in-depth understanding of the different headers as well as HTTP headers in general. Please refer to ‘Adding custom headers in various HTTP servers’ above, if you simply want to know how to activate the various headers in your HTTP server software.

We have all used the well-known bug bounty platforms, but did you know that some companies offer bug bounties through their own websites?

This list will help bug bounty hunters and security researchers to explore different bug bounty programs and responsible disclosure policies.

Company URL
The Atlantic
Rollbar Docs
Vulnerability Analysis
Ambassador Referral Software
NN Group
Octopus Deploy
Royal IHC
Fox-IT (ENG)
Gallagher Security
Freshworks Inc.
RIPE Network
Salesforce Trust
Duo Security
Oslo Børs
MWR InfoSecurity
Orion Health
Royal Bank of Scotland
Flood IO
Zero Day Initiative
Cyber Safety
Port of Rotterdam
Georgia Institute of …
BitSight Technologies
Hacking as a Service
N.V. Nederlandse Gasunie
Palo Alto Networks

  1. wifite
    Link Project:
    Wifite is an automated wireless attack tool, for Linux only. It was designed for use with pentesting distributions of Linux such as Kali Linux, Pentoo, and BackBox, or any Linux distribution with wireless drivers patched for injection. The script also appears to work with Ubuntu 11/10, Debian 6, and Fedora 16. Wifite must be run as root; this is required by the suite of programs it uses. Since running downloaded scripts as root is a bad idea, I recommend using the Kali Linux bootable Live CD, a bootable USB stick (for persistence), or a virtual machine. Note that virtual machines cannot directly access hardware, so a wireless USB dongle would be required. Wifite assumes that you have a wireless card and the appropriate drivers, patched for injection and promiscuous/monitor mode.
  2. wifiphisher
    Link Project:
    Wifiphisher is a security tool that performs automatic Wi-Fi association attacks to force wireless clients to unknowingly connect to an attacker-controlled access point. It is a rogue access point framework that can be used to mount automated, victim-customized phishing attacks against Wi-Fi clients in order to obtain credentials or infect the victims with malware. It can work as a social engineering attack tool that, unlike other methods, does not include any brute forcing. It is an easy way to obtain credentials from captive portals and third-party login pages (e.g. in social networks) or WPA/WPA2 pre-shared keys. Wifiphisher works on Kali Linux and is licensed under the GPL.
  3. wifi-pumpkin
    Link Project:
    Wifi-Pumpkin has a very friendly graphical user interface and handles well; it is my favorite tool for setting up phishing Wi-Fi attacks. It offers a rich set of features, excellent ease of use, and very good compatibility. The researcher actively updates it, so this fun project is worth following.
  4. fruitywifi
    Link Project:
    FruityWifi is an open source tool to audit wireless networks. It allows the user to deploy advanced attacks by directly using the web interface or by sending messages to it.
    Initially the application was created to be used with the Raspberry Pi, but it can be installed on any Debian-based system.
  5. mana toolkit
    Link Project:
    A toolkit for rogue access point (evilAP) attacks first presented at Defcon 22.
    More specifically, it contains the improvements to KARMA attacks we implemented into hostapd, as well as some useful configs for conducting MitM once you’ve managed to get a victim to connect.
  6. 3vilTwinAttacker
    Link Project:
    Much like wifi-pumpkin in its interface, it has a good GUI, and the overall experience, ease of use, and compatibility are good. However, the researcher has hardly updated it.
  7. ghost-phisher
    Link Project:
    It has a good graphical interface, but almost no fault tolerance, and many of its options are easily confused; still, the overall experience is quite good. With one click it can establish a rogue AP with DHCP and DNS services, making it easy to launch a variety of man-in-the-middle attacks, and its ease of use and compatibility are good. The Kali team now maintains an official, updated version of the original repo.
  8. fluxion
    Link Project:
    Fluxion is a remake of linset by vk496 with (hopefully) less bugs and more functionality. It’s compatible with the latest release of Kali (rolling). The attack is mostly manual, but experimental versions will automatically handle most functionality from the stable releases.

Happy Hunting

The Windows passwords can be accessed in a number of different ways. The most common is via the Security Accounts Manager (SAM) file, obtaining the system passwords in their hashed form with a number of different tools. Alternatively, passwords can be read from memory, which has the added benefit of recovering them in plain text and avoiding the cracking requirement. To understand the formats you'll see when dumping Windows system hashes, a brief overview of the different storage formats is required.

Lan Manager (LM) Hashes
Originally, Windows passwords shorter than 15 characters were stored in the Lan Manager (LM) hash format. Some OSes, such as Windows 2000, XP and Server 2003, continue to use these hashes unless disabled. Occasionally an OS like Vista may store the LM hash for backwards compatibility with other systems. For numerous reasons this hash is simply terrible. It includes several poor design decisions from Microsoft, such as splitting the password into two blocks and allowing each to be cracked independently. Through the use of rainbow tables, which will be explained later, it's trivial to crack a password stored in an LM hash regardless of complexity. This hash is stored, together with the same password calculated in the NT hash format, in the following layout: username:RID:LM-hash:NT-hash:::

An example of a dumped NTLM hash with the LM and NT components:
Administrator:500:611D6F6E763B902934544489FCC9192B:B71ED1E7F2B60ED5A2EDD28379D45C91:::
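The split-and-crack-independently weakness is easy to see in the LM preprocessing step, sketched below in Python. This shows only the preprocessing; the real algorithm additionally DES-encrypts a fixed string with each half as the key:

```python
def lm_preprocess(password):
    """Illustrate the LM design flaws: case is destroyed, the password is
    padded to 14 bytes, and the result is split into two 7-byte halves
    that are hashed (and thus crackable) independently."""
    pw = password.upper().encode("ascii")[:14].ljust(14, b"\x00")
    return pw[:7], pw[7:]

# A password of 7 characters or fewer leaves the entire second half empty,
# which is instantly recognizable in a dumped hash.
print(lm_preprocess("Secret"))       # (b'SECRET\x00', b'\x00\x00\x00\x00\x00\x00\x00')
print(lm_preprocess("Password123"))  # (b'PASSWOR', b'D123\x00\x00\x00')
```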

NT Hashes
Newer Windows operating systems use the NT hash. In simple terms there is no significant weakness in this hash that sets it apart from any other cryptographic hash function. Cracking methods such as brute force, rainbow tables or word lists are required to recover the password if it’s only stored in the NT format.

An example of a dumped NTLM hash with only the NT component (as seen on newer systems):
Administrator:500:NO PASSWORD*********************:EC054D40119570A46634350291AF0F72:::

It’s worth noting the “no password” string is variable based on the tool. Others may present this information as padded zeros, or commonly you may see the string “AAD3B435B51404EEAAD3B435B51404EE” in place of no password. This signifies that the LM hash is empty and not stored.

The hashes are located in the SAM and SYSTEM files in the Windows\System32\config directory. They are also stored in the registry hive HKEY_LOCAL_MACHINE\SAM, which cannot be accessed at run time. Finally, backup copies can often be found in Windows\Repair.

Tool – PwDump7 –
This tool can be executed on the target machine to recover the system hashes. Simply download and run the binary with at least administrator account privileges.

Tool – Windows Credential Editor –
Windows Credentials Editor (WCE) is great for dumping passwords that are in memory. Personally I typically use it with the -w flag to dump passwords in clear text. This can often net you passwords that are infeasible to get any other way.

Tool – Meterpreter
If you have a meterpreter shell on the system, often you can get the hashes by calling the hashdump command.

Method – Recovery Directory
Occasionally you may not have direct access to the file required, or perhaps even command line interaction with the victim. An example of this would be a local file inclusion attack on a web service. In those cases it's recommended you try to recover the SYSTEM and SAM files located in the Windows\Repair directory.

Method – Live CD
Sometimes you may have physical access to the computer but wish to dump the passwords for cracking later. Using a Live CD is a common method of being able to mount the Windows drive and recover the SYSTEM and SAM files from the System32/config directory since the OS isn’t preventing you access.