When it comes to web application security, one often thinks about the obvious: sanitize user input, transmit data over encrypted channels and use secure functions. Often overlooked are the positive effects that HTTP response headers, in conjunction with a modern web browser, can have on web security.

Active Security

Here we will take a look at the headers recommended by the Open Web Application Security Project (OWASP). These headers can be utilised to actively increase the security of the web application.

X-FRAME-OPTIONS

This header tells the browser if and when a page may be displayed as part of another page (i.e. in an IFRAME). Allowing a page to be loaded inside an IFRAME opens up the risk of a so-called clickjacking attack. In this attack the target site is loaded in the background, hidden from the victim. The victim is then enticed to perform clicks on a decoy website (e.g. through a survey or a prize draw); these clicks are secretly executed on the target site in the background. If the victim is currently logged in to the target site, those clicks are performed in the context of that user's session. Via this method it is possible to execute commands as the user, as well as to exfiltrate information from the user's context.

The X-Frame-Options header can be used with the following options:

  • DENY
  • SAMEORIGIN
  • ALLOW-FROM <uri> (where <uri> is your desired URI, including the protocol handler)

Unless your application explicitly requires being loaded inside an IFRAME, you should set the header to DENY.

X-Frame-Options: DENY

If your application uses IFRAMEs within the application itself, then you should set the header to SAMEORIGIN:

X-Frame-Options: SAMEORIGIN

If you want your page to be frameable across a different origin, then you should explicitly define the external origin:

X-Frame-Options: ALLOW-FROM https://contextis.co.uk

Please note that the ALLOW-FROM directive of the X-Frame-Options header expects exactly one origin: a scheme and host, with no path and no wildcards.
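The browser-side decision logic behind these three directives can be sketched in a few lines of Python (the function name and the simplified origin handling are ours; real browsers perform additional checks):

```python
def frame_allowed(header_value, page_origin, framing_origin):
    """Return True if a page served with the given X-Frame-Options
    value may be framed by framing_origin. Simplified sketch."""
    value = header_value.strip()
    if value.upper() == "DENY":
        return False
    if value.upper() == "SAMEORIGIN":
        return page_origin == framing_origin
    if value.upper().startswith("ALLOW-FROM"):
        allowed = value.split(None, 1)[1].strip()
        return framing_origin == allowed
    return True  # no (or unknown) directive: browsers default to allowing frames

print(frame_allowed("DENY", "https://a.example", "https://a.example"))         # False
print(frame_allowed("SAMEORIGIN", "https://a.example", "https://a.example"))   # True
print(frame_allowed("ALLOW-FROM https://b.example",
                    "https://a.example", "https://b.example"))                 # True
```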

STRICT-TRANSPORT-SECURITY

This header, often abbreviated as HSTS (HTTP Strict Transport Security), tells the browser to enforce an HTTPS connection whenever a user tries to reach a site that sends it. All major browsers support this feature, and should:

  1. Only connect to the site via HTTPS
  2. Convert all HTTP references on the site (e.g. JavaScript includes) to HTTPS and
  3. Refuse to load the website in case of errors with the SSL certificate (e.g. Certificate expired, broken certificate chain, …)

It is important to note that this header can only be set via an HTTPS response; the user therefore needs to connect to the site at least once via HTTPS, unless you make some special preparations (more on that below). It is also important to note that the header is only valid for a certain amount of time: the lifetime is specified in seconds. Context recommends the following setting, which tells the browser to obey the STS setting for half a year:

Strict-Transport-Security: max-age=15768000

If this rule should also cover all subdomains, the header can be extended by adding the attribute ‘includeSubDomains’:

Strict-Transport-Security: max-age=15768000; includeSubDomains

Some browsers (at least Chrome, Firefox, IE11/Edge and Safari) ship with a “preload list”: a list of sites that have explicitly declared that they want to use HSTS. If the user tries to access a listed site, the browser automatically enforces the HSTS rule, even for the very first connection, which would otherwise be vulnerable to a man-in-the-middle attack. To add your own website to this preload list you have to submit it to the browser vendors’ preload service and append the preload directive to the header, e.g.:

Strict-Transport-Security: max-age=15768000; includeSubDomains; preload
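The max-age arithmetic is easy to get wrong by hand; a small Python helper (the function name is ours) shows where the 15768000 figure comes from and builds the header value:

```python
from datetime import timedelta

def hsts_header(lifetime, include_subdomains=False, preload=False):
    """Build a Strict-Transport-Security header value (helper is ours)."""
    parts = ["max-age=%d" % int(lifetime.total_seconds())]
    if include_subdomains:
        parts.append("includeSubDomains")
    if preload:
        parts.append("preload")
    return "; ".join(parts)

# Half a year = 182.5 days = 15768000 seconds
print(hsts_header(timedelta(days=182.5), include_subdomains=True, preload=True))
# max-age=15768000; includeSubDomains; preload
```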

X-XSS-PROTECTION

This header is surrounded by a little controversy, and different people recommend different settings; some even recommend explicitly disabling it. So what is the deal with this header?

The purpose of this header is to instruct the web browser to utilise its cross-site scripting protection, if present (X-XSS-Protection: 1). Currently only Chrome, Internet Explorer and Safari have such an engine built in and understand this header (Firefox seems to rely on the third-party add-on NoScript).

It might seem like a good idea to try and filter malicious requests where the attack happens, at the browser, but filtering is very hard, especially when one tries to heuristically detect malicious code, sanitize it and at the same time maintain a working site. This led to several filter bypasses and even introduced cross-site scripting vulnerabilities on previously healthy sites.

Once it became apparent that building a heuristic filter that tries to sanitize unknown code is a Sisyphean task, a new all-or-nothing approach was invented: X-XSS-Protection: 1; mode=block. If this mode is set, the browser is instructed not to render the page at all, displaying an empty page (about:blank) instead. But even that approach had flaws in its early implementations, leading some major sites (such as facebook.com, live.com and slack.com) to explicitly disable the XSS filter (X-XSS-Protection: 0).

So while it is difficult to give a definitive recommendation for this header, it seems that the variant ‘X-XSS-Protection: 1; mode=block’ has matured rather well and has outgrown its early flaws. Besides that, the best protection against cross-site scripting is still sanitizing all your input and output ;-).

To explicitly enable the filter that tries to sanitize malicious input set the following header:

X-XSS-Protection: 1

To use the all-or-nothing approach that blocks a site when malicious input is detected set the following header:

X-XSS-Protection: 1; mode=block

Additionally, one can set a ‘report’ parameter that contains a URL. If one of the WebKit XSS auditors (Chrome, Safari) encounters an attack, it will send a POST request to this URL containing details about the incident:

X-XSS-Protection: 1; mode=block; report=https://domain.tld/folder/file.ext

X-CONTENT-TYPE-OPTIONS

This header can be used to prevent certain versions of Internet Explorer from ‘sniffing’ the MIME type of a page. It is a feature of Internet Explorer to interpret pages served as ‘Content-Type: text/plain’ as HTML when they contain HTML tags. This, however, introduces cross-site scripting risks when one has to deal with user-provided content. The X-Content-Type-Options header knows only one value – ‘nosniff’ – which prevents the browser from trying to sniff the MIME type.

X-Content-Type-Options: nosniff

PUBLIC-KEY-PINS

Public-Key-Pinning, also known as HTTP Public Key Pinning (HPKP for short), is still relatively new and not yet widely used. However, it has great security potential, as it allows site operators to specify (‘pin’) a valid certificate and rely less on CAs, which in the past have proven to be susceptible to attack (e.g. any CA could create a technically valid and trusted certificate that has not been issued by you). Similar to HSTS, the browser is then supposed to remember this pin and only accept connections to a site if the certificate pin matches the pin provided by the header. This, however, means that an unexpected certificate change can leave visitors locked out of the web presence. For this reason it is required to provide a backup certificate pin that can be used if the first one fails. The header must also include a max-age attribute, which, once again, specifies the lifetime in seconds. Please bear in mind that this is also the potential lockout time for an unaware user.

Public-Key-Pins: pin-sha256="<sha256>"; pin-sha256="<sha256>"; max-age=15768000;

If this rule should also cover all subdomains, the header can be extended by adding the attribute ‘includeSubDomains’:

Public-Key-Pins: pin-sha256="<sha256>"; pin-sha256="<sha256>"; max-age=15768000; includeSubDomains
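Per RFC 7469, each pin is the base64-encoded SHA-256 digest of the certificate's DER-encoded SubjectPublicKeyInfo. A minimal Python sketch of that computation (the function name is ours, and the placeholder bytes stand in for a real SPKI structure you would extract with a tool such as openssl):

```python
import base64
import hashlib

def spki_pin(spki_der):
    """RFC 7469 pin: base64 of the SHA-256 digest of the DER-encoded
    SubjectPublicKeyInfo. Helper name is ours."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Placeholder bytes instead of a real SPKI structure, for illustration only.
pin = spki_pin(b"placeholder-spki-der")
print('Public-Key-Pins: pin-sha256="%s"; max-age=15768000' % pin)
```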

CONTENT-SECURITY-POLICY (SUPERSEDES X-CONTENT-SECURITY-POLICY AND X-WEBKIT-CSP)

The Content-Security-Policy (CSP for short) is a flexible approach to specify which content on a site may be executed and which may not. One of the current problems is that the web browser does not know which sources to trust and which not to trust; for example, is a third-party JavaScript include from apis.google.com good or bad? The only proper solution to this is source whitelisting, where the developer specifies legitimate resource locations. A very basic example of how to allow JavaScript (script-src) from both the local site (‘self’) and apis.google.com:

Content-Security-Policy: script-src 'self' https://apis.google.com
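Policies quickly grow beyond one directive, so it can help to assemble them from a mapping rather than by string concatenation. A small Python sketch (the helper name is ours):

```python
def build_csp(policy):
    """Serialize a {directive: [sources]} mapping into a CSP header
    value (helper is ours)."""
    return "; ".join("%s %s" % (directive, " ".join(sources))
                     for directive, sources in policy.items())

header = build_csp({
    "default-src": ["'self'"],
    "script-src":  ["'self'", "https://apis.google.com"],
})
print("Content-Security-Policy: " + header)
# Content-Security-Policy: default-src 'self'; script-src 'self' https://apis.google.com
```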

CSP has a few additional keywords that allow for very granular access control. It is important to note that CSP is intended as a per-page model, so every page needs its own set of policy headers.

Passive Security

The following headers do not actively enable any security-related features, but rather have a passive impact on security, typically by revealing more information than necessary. By now it is well established that security by obscurity is a more than questionable concept if you rely solely on it. However, there is little to gain from leaving your cards open on the table and providing an attacker with valuable information; just don’t think that this alone would be enough.

COOKIE ATTRIBUTES (HTTPONLY AND SECURE)

Often overlooked are the special attributes that can be associated with cookies, which can drastically reduce the risks of cookie theft.

HttpOnly

The HttpOnly attribute tells the browser to deny JavaScript access to this cookie, making it more difficult to access via cross-site scripting.

Set-Cookie: cookie=xxxxxxxxx; HttpOnly

Secure

The Secure attribute tells the browser to send this cookie only over an HTTPS connection. This should be the norm for all session and authentication related cookies, as it prevents the cookie from being easily intercepted over an unencrypted HTTP connection.

Set-Cookie: cookie=xxxxxxxxx; Secure

Of course these attributes can be combined:

Set-Cookie: cookie=xxxxxxxxx; HttpOnly; Secure
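If you generate cookies in application code, the standard library can emit these attributes for you; a short Python example using `http.cookies` (cookie name and value are placeholders):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "xxxxxxxxx"
cookie["session"]["httponly"] = True   # deny JavaScript access
cookie["session"]["secure"] = True     # send over HTTPS only
print(cookie.output())
# Set-Cookie: session=xxxxxxxxx; HttpOnly; Secure
```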

SERVER / X-POWERED-BY

Both of these headers advertise the server software in use and its version number. While these headers might be nice for debugging purposes, they do not contribute to the user experience in any way and should either be omitted entirely or reduced to a level that does not leak any version details.

CACHING DIRECTIVES

Another issue that is often overlooked is the caching of sensitive information by the browser. A browser frequently stores elements of a website in a local cache to speed up the browsing experience. While this behaviour is fine for non-sensitive pages and elements like graphics and stylesheets, it has to be avoided for sensitive information (e.g. pages from an authenticated area of a web application). The problem gets worse in a shared computing environment (e.g. office, school, internet café), where other users can easily access your browser cache. To tell the browser (and possible intermediate caches such as proxies) not to store anything in its cache, one should use the following directives:

Cache-Control: no-cache, no-store
Expires: 0
Pragma: no-cache

It is important to note that the often encountered directive “Cache-Control: private” cannot be considered secure in a shared computing environment, as it still allows the browser to store these elements in its cache.

ETag

The “Entity Tag” (short ETag) header is used for caching purposes. The server uses a special algorithm to calculate an individual ETag for every revision of a file it serves. The browser is then able to ask the server if the file is still available under this ETag. If it is, the server responds with a short HTTP 304 status telling the browser to use the locally cached version, otherwise it sends the full resource as part of an HTTP 200 status.

While this is a useful header, you’ll sometimes find it referenced in vulnerability-related articles or reports. The problem is that certain versions of Apache (before 2.3.14) disclosed, in their default configuration, the inode of the file being served. The inode can be used for further attacks, e.g. via the Network File System (NFS), which uses these inodes to create file handles. The problematic default configuration has been corrected in more recent Apache versions, but you should nonetheless make sure that the FileETag setting in your httpd.conf does not contain the INode attribute. The following line is fine:

FileETag MTime Size

The following line is NOT: 

FileETag INode MTime Size
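The safe variant above builds the tag from modification time and size only; the idea is easy to reproduce, for instance if you serve files from application code. A Python sketch (the helper name and tag format are ours, loosely mirroring Apache's hex style):

```python
import os
import tempfile

def file_etag(path):
    """ETag from modification time and size only, mirroring
    'FileETag MTime Size'. No inode is leaked. Helper is ours."""
    st = os.stat(path)
    return '"%x-%x"' % (int(st.st_mtime), st.st_size)

# Demo on a temporary file
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
print(file_etag(f.name))
os.unlink(f.name)
```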

X-ROBOTS-TAG AND ROBOTS.TXT

The X-Robots-Tag header can be used to give search engines that support it directives on how a page or file should be indexed. The advantage of the X-Robots-Tag over a single robots.txt file or the robots meta tag is that this header can be set and configured globally and adjusted at a very granular and flexible level (e.g. via a regular expression that matches certain URLs). Sending a meta tag with a media file: not possible. Sending an HTTP header with a media file: no problem. It also has the advantage of disclosing information on a per-request basis instead of in a single file. Just think about the secret directories that you don’t want anyone to know about: listing them in a robots.txt file with a disallow entry? Probably a bad idea, since this immediately lets everyone know what you want to hide; you might as well just put a link on your website.

So should you ditch robots.txt altogether and rely solely on the X-Robots-Tag? Probably not; instead, combine them for the greatest compatibility. However, keep in mind that the robots.txt file should only mention files and directories that you want to be indexed. You should never list files that you want to block; instead, place a general disallow entry in the robots.txt:

An example to block everything:

User-Agent: *
Disallow: /

An example that tells the crawler to index everything under /Public/ but not the rest:

User-Agent: *
Allow: /Public/
Disallow: /

ADDING CUSTOM HEADERS IN VARIOUS HTTP SERVERS

Below you can find a general example on how to set a static custom HTTP header in different HTTP server software, as well as links to a more in-depth manual for setting more complex header rules.

Apache

For Apache it is recommended to use the module ‘mod_headers’ to control HTTP headers. The directive ‘Header’ can be placed almost anywhere in the configuration file, e.g.:

<Directory "/var/www/dir1">
    Options +Indexes
    Header set X-XSS-Protection "1; mode=block"
</Directory>

For a more detailed guide on how to set HTTP headers with mod_headers please refer to the mod_headers documentation.

Internet Information Services (IIS)

For IIS there are two ways to set custom headers:

1. Via command line:

appcmd set config /section:httpProtocol /+"customHeaders.[name='X-XSS-Protection',value='1; mode=block']"

2. Via the graphical interface: Open IIS Manager and use the Connections pane to find the appropriate level you want to enable the header for. In the Home pane, double-click ‘HTTP Response Headers’. Now look for the Actions pane, click ‘Add…’ and set both the name and the value for the header you want to set. In our example the name would be ‘X-XSS-Protection’ and the value would be ‘1; mode=block’.

For a more detailed guide on how to set HTTP headers with IIS please refer to the IIS documentation.

Lighttpd

For Lighttpd it is recommended to use the module ‘mod_setenv’ to control HTTP headers. The directive ‘setenv.add-response-header’ can be placed almost anywhere in the configuration file, e.g.:

setenv.add-response-header = (
      "X-XSS-Protection" => "1; mode=block"
    )

For a more detailed guide on how to set HTTP headers with Lighttpd please refer to the mod_setenv documentation.

NGINX

For NGINX it is recommended to use the module ‘ngx_http_headers_module’ to control HTTP headers. The directive ‘add_header’ can be placed in the appropriate location in the configuration file, e.g.:

server {
    listen       80;
    server_name  domain1.com www.domain1.com;
    root         html;

    location / {
      add_header X-XSS-Protection "1; mode=block" always;
    }
  }

For a more detailed guide on how to set HTTP headers with NGINX please refer to the ngx_http_headers_module documentation.
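If you control the application rather than the server, the same headers can be stamped on every response in code. A minimal WSGI middleware sketch in Python (all names are ours, and the header set is illustrative; adjust the values to your application):

```python
# Security headers mirroring the examples discussed above (illustrative set).
SECURITY_HEADERS = [
    ("X-Frame-Options", "DENY"),
    ("X-XSS-Protection", "1; mode=block"),
    ("X-Content-Type-Options", "nosniff"),
    ("Strict-Transport-Security", "max-age=15768000; includeSubDomains"),
]

def add_security_headers(app):
    """Wrap a WSGI app so every response carries SECURITY_HEADERS."""
    def wrapped(environ, start_response):
        def start_with_headers(status, headers, exc_info=None):
            return start_response(status, list(headers) + SECURITY_HEADERS, exc_info)
        return app(environ, start_with_headers)
    return wrapped

# Usage with a trivial app:
def hello(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

app = add_security_headers(hello)
```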

Summary and Conclusions

We have seen that there are quite a few more or less new HTTP headers that can actively contribute to a web site’s security. We have also seen that there are a few well-established headers that might be worth revisiting to decrease the amount of information that is leaked.

References

Following are a few references for the technically interested reader that wants to get a more in-depth understanding of the different headers as well as HTTP headers in general. Please refer to ‘Adding custom headers in various HTTP servers’ above, if you simply want to know how to activate the various headers in your HTTP server software.


We have all used sites such as bugcrowd.com, but did you know that some companies offer bug bounties through their own websites?

The following list will help bug bounty hunters and security researchers explore different bug bounty programs and responsible disclosure policies.

Company URL
The Atlantic https://www.theatlantic.com/responsible-disclosure-policy/
Rollbar Docs https://docs.rollbar.com/docs/responsible-disclosure-policy
Vulnerability Analysis https://vuls.cert.org/confluence/display/Wiki/Vulnerability+Disclosure+Policy
Ambassador Referral Software https://www.getambassador.com/responsible-disclosure-policy
NN Group https://www.nn-group.com/Footer-Pages/Ethical-hacking-NN-Groups-Responsible-Disclosure-Policy.htm
Octopus Deploy https://octopus.com/security/disclosure
Mimecast https://www.mimecast.com/responsible-disclosure/
Royal IHC https://www.royalihc.com/en/responsible-disclosure-policy
SignUp.com https://signup.com/responsible-disclosure-policy
MailTag https://www.mailtag.io/disclosure-policy
Fox-IT (ENG) https://www.fox-it.com/en/responsible-disclosure-policy/
Kaseya https://www.kaseya.com/legal/vulnerability-disclosure-policy
Vend https://www.vendhq.com/responsible-disclosure-policy
Gallagher Security https://security.gallagher.com/gallagher-responsible-disclosure-policy
Surevine https://www.surevine.com/responsible-disclosure-policy/
IKEA https://www.ikea.com/ms/en_US/responsible-disclosure/index.html
Bunq https://www.bunq.com/en/terms-disclosure
GitLab https://about.gitlab.com/disclosure/
Rocket.Chat https://rocket.chat/docs/contributing/security/responsible-disclosure-policy/
Quantstamp https://quantstamp.com/responsible-disclosure
WeTransfer https://wetransfer.com/legal/disclosure
18F https://18f.gsa.gov/vulnerability-disclosure-policy/
Veracode https://www.veracode.com/responsible-disclosure/responsible-disclosure-policy
Oracle https://www.oracle.com/support/assurance/vulnerability-remediation/disclosure.html
Mattermost https://about.mattermost.com/report-security-issue/
Freshworks Inc. https://www.freshworks.com/security/responsible-disclosure-policy
OV-chipkaart https://www.ov-chipkaart.nl/service-and-contact/responsible-disclosure-policy.htm
ICS-CERT https://ics-cert.us-cert.gov/ICS-CERT-Vulnerability-Disclosure-Policy
Netflix https://help.netflix.com/en/node/6657
RIPE Network https://www.ripe.net/support/contact/responsible-disclosure-policy
Pocketbook https://getpocketbook.com/responsible-disclosure-policy/
Salesforce Trust https://trust.salesforce.com/en/security/responsible-disclosure-policy/
Duo Security https://duo.com/labs/disclosure
EURid https://eurid.eu/nl/other-infomation/eurid-responsible-disclosure-policy/
Oslo Børs https://www.oslobors.no/ob_eng/Oslo-Boers/About-Oslo-Boers/Responsible-Disclosure
Marketo https://documents.marketo.com/legal/notices/responsible-disclosure-policy.pdf
FreshBooks https://www.freshbooks.com/policies/responsible-disclosure
BizMerlinHR https://www.bizmerlin.com/responsible-disclosure-policy
MWR InfoSecurity https://labs.mwrinfosecurity.com/mwr-vulnerability-disclosure-policy
KAYAK https://www.kayak.co.in/security
98point6 https://www.98point6.com/responsible-disclosure-policy/
AlienVault https://www.alienvault.com/documentation/usm-appliance/system-overview/how-to-submit-a-security-issue-to-alienvault.htm
Seafile https://www.seafile.com/en/responsible_disclosure_policy/
LevelUp https://www.thelevelup.com/security-response
BankID https://www.bankid.com/en/disclosure
Orion Health https://orionhealth.com/global/support/responsible-disclosure/
Aptible https://www.aptible.com/legal/responsible-disclosure/
NowSecure https://www.nowsecure.com/company/responsible-disclosure-policy/
Takealot.com https://www.takealot.com/help/responsible-disclosure-policy
Smokescreen https://www.smokescreen.io/responsible-disclosure-policy/
Royal Bank of Scotland https://personal.rbs.co.uk/personal/security-centre/responsible-disclosure.html
Flood IO https://flood.io/security
CERT.LV https://www.cert.lv/en/about-us/responsible-disclosure-policy
Zero Day Initiative https://www.zerodayinitiative.com/advisories/disclosure_policy/
Geckoboard https://support.geckoboard.com/hc/en-us/articles/115007061468-Responsible-Disclosure-Policy
Internedservices https://www.internedservices.nl/en/responsible-disclosure-policy/
FloydHub https://www.floydhub.com/about/security
Practo https://www.practo.com/company/responsible-disclosure-policy
Zimbra https://wiki.zimbra.com/wiki/Zimbra_Responsible_Disclosure_Policy
Cyber Safety https://www.utwente.nl/en/cyber-safety/responsible/
Port of Rotterdam https://www.portofrotterdam.com/en/responsible-disclosure
Georgia Institute of … http://www.policylibrary.gatech.edu/information-technology/responsible-disclosure-policy
NautaDutilh https://www.nautadutilh.com/nl/responsible-disclosure/
BitSight Technologies https://www.bitsighttech.com/responsible-disclosure
BOSCH https://psirt.bosch.com/en/responsibleDisclosurePolicy.html
CARD.com https://www.card.com/responsible-disclosure-policy
SySS GmbH https://www.syss.de/en/responsible-disclosure-policy/
Mailtrack https://mailtrack.io/en/responsible-vulnerability
Pinterest https://policy.pinterest.com/en/responsible-disclosure-statement
PostNL https://www.postnl.nl/en/responsible-disclosure/
Pellustro https://pellustro.com/responsible-disclosure-policy/
iWelcome https://www.iwelcome.com/responsible-disclosure/
Hacking as a Service https://hackingasaservice.deloitte.nl/Home/ResponsibleDisclosure
N.V. Nederlandse Gasunie https://www.gasunie.nl/en/responsible-disclosure
Hostinger https://www.hostinger.co.uk/responsible-disclosure-policy
SiteGround https://www.siteground.com/blog/responsible-disclosure/
Odoo https://www.odoo.com/security-report
Thumbtack https://help.thumbtack.com/article/responsible-disclosure-policy
ChatShipper http://chatshipper.com/responsible-disclosure-policy/
ServerBiz https://server.biz/en/legal/responsible-disclosure
Palo Alto Networks https://www.paloaltonetworks.com/security-disclosure

  1. wifite
    Link Project: https://github.com/derv82/wifite
    Wifite is for Linux only. Wifite is an automated wireless attack tool designed for use with pentesting distributions of Linux, such as Kali Linux, Pentoo or BackBox, or any Linux distribution with wireless drivers patched for injection. The script appears to also operate with Ubuntu 11/10, Debian 6, and Fedora 16. Wifite must be run as root; this is required by the suite of programs it uses. Running downloaded scripts as root is a bad idea, so I recommend using the Kali Linux bootable Live CD, a bootable USB stick (for persistence), or a virtual machine. Note that virtual machines cannot directly access hardware, so a wireless USB dongle would be required. Wifite assumes that you have a wireless card and the appropriate drivers, patched for injection and promiscuous/monitor mode.
  2. wifiphisher
    Link Project: https://github.com/sophron/wifiphisher
    Wifiphisher is a security tool that performs automatic Wi-Fi association attacks to force wireless clients to unknowingly connect to an attacker-controlled access point. It is a rogue access point framework that can be used to mount automated, victim-customized phishing attacks against Wi-Fi clients in order to obtain credentials or infect the victims with malware. As a social engineering attack tool it, unlike other methods, does not include any brute forcing. It is an easy way to obtain credentials from captive portals and third-party login pages (e.g. on social networks) or WPA/WPA2 pre-shared keys. Wifiphisher works on Kali Linux and is licensed under the GPL.
  3. wifi-pumpkin
    Link Project: https://github.com/P0cL4bs/WiFi-Pumpkin
    WiFi-Pumpkin has a very friendly graphical user interface and good handling; it is my favourite tool for setting up a phishing Wi-Fi attack. It offers a rich set of functions, excellent ease of use and very good compatibility. The researcher is actively updating it, so this fun project is worth following.
  4. fruitywifi
    Link Project: https://github.com/xtr4nge/FruityWifi
    FruityWifi is an open source tool to audit wireless networks. It allows the user to deploy advanced attacks by directly using the web interface or by sending messages to it.
    Initially the application was created to be used with the Raspberry Pi, but it can be installed on any Debian-based system.
  5. mana toolkit
    Link Project: https://github.com/sensepost/mana
    A toolkit for rogue access point (evilAP) attacks first presented at Defcon 22.
    More specifically, it contains the improvements to KARMA attacks we implemented into hostapd, as well as some useful configs for conducting MitM once you’ve managed to get a victim to connect.
  6. 3vilTwinAttacker
    Link Project:https://github.com/wi-fi-analyzer/3vilTwinAttacker
    Much like WiFi-Pumpkin in its interface; it has a good graphical interface, the overall experience is very good, and ease of use and compatibility are good. However, the researcher has hardly updated it.
  7. ghost-phisher
    Link Project: http://tools.kali.org/information-gathering/ghost-phisher
    It has a good graphical interface, but almost no fault tolerance, and many of its options are easily confused; overall, though, it is still pleasant to use. It can set up a rogue AP with one click, provides DHCP and DNS service interfaces, and makes it easy to launch a variety of man-in-the-middle attacks. Ease of use and compatibility are good. The official Kali team has taken over updating the original repo.
  8. fluxion
    Link Project: https://github.com/wi-fi-analyzer/fluxion
    Fluxion is a remake of linset by vk496 with (hopefully) fewer bugs and more functionality. It’s compatible with the latest release of Kali (rolling). The attack is mostly manual, but experimental versions will automatically handle most functionality from the stable releases.

Happy Hunting

The Windows passwords can be accessed in a number of different ways. The most common way would be via accessing the Security Accounts Manager (SAM) file and obtaining the system passwords in their hashed form with a number of different tools. Alternatively, passwords can be read from memory, which has the added benefit of recovering the passwords in plain text and avoiding the cracking requirement. In order to understand the formats you’ll see when dumping Windows system hashes, a brief overview of the different storage formats is required.

Lan Manager (LM) Hashes
Originally, Windows passwords shorter than 15 characters were stored in the Lan Manager (LM) hash format. Some OSes such as Windows 2000, XP and Server 2003 continue to use these hashes unless disabled. Occasionally an OS like Vista may store the LM hash for backwards compatibility with other systems. For numerous reasons this hash is simply terrible. It includes several poor design decisions from Microsoft, such as splitting the password into two blocks and allowing each to be cracked independently. Through the use of rainbow tables, which will be explained later, it’s trivial to crack a password stored in an LM hash regardless of complexity. This hash is then stored alongside the same password calculated in the NT hash format, in the following layout: <username>:<RID>:<LM hash>:<NT hash>:::
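The cost of that split can be quantified with back-of-the-envelope arithmetic. Assuming roughly 69 usable characters (LM upper-cases letters before hashing; the exact alphabet size is an approximation), Python makes the comparison concrete:

```python
# LM splits a 14-character password into two independent 7-character
# halves, so an attacker searches two small keyspaces instead of one
# large one. The 69-character alphabet is an approximation.
ALPHABET = 69
one_block = ALPHABET ** 14       # brute-forcing one true 14-char hash
two_blocks = 2 * ALPHABET ** 7   # two independent 7-char searches

print(two_blocks)                # 14892706505178 candidates, about 1.5e13
print(one_block // two_blocks)   # the split makes cracking ~3.7e12 times cheaper
```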

An example of a dumped NTLM hash with the LM and NT components: Administrator:500:611D6F6E763B902934544489FCC9192B:B71ED1E7F2B60ED5A2EDD28379D45C91:::

NT Hashes
Newer Windows operating systems use the NT hash. In simple terms there is no significant weakness in this hash that sets it apart from any other cryptographic hash function. Cracking methods such as brute force, rainbow tables or word lists are required to recover the password if it’s only stored in the NT format.

An example of a dumped NTLM hash with only the NT component (as seen on newer systems):
Administrator:500:NO PASSWORD*********************:EC054D40119570A46634350291AF0F72:::

It’s worth noting that the “no password” string varies from tool to tool. Others may present this information as padded zeros, or commonly you may see the string “AAD3B435B51404EEAAD3B435B51404EE” in place of “no password”. This signifies that the LM hash is empty and not stored.
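Parsing these dump lines and flagging whether a crackable LM hash is actually present is straightforward; a Python sketch (the helper names are ours, using the example lines from above):

```python
EMPTY_LM = "AAD3B435B51404EE" * 2   # LM hash of an empty password

def parse_pwdump(line):
    """Split a pwdump-style line into its fields and flag whether a
    real LM hash is stored. Helper is ours."""
    user, rid, lm, nt = line.split(":")[:4]
    lm_stored = lm.upper() != EMPTY_LM and not lm.upper().startswith("NO PASSWORD")
    return {"user": user, "rid": rid, "lm": lm, "nt": nt, "lm_stored": lm_stored}

rec = parse_pwdump("Administrator:500:NO PASSWORD*********************:"
                   "EC054D40119570A46634350291AF0F72:::")
print(rec["user"], rec["lm_stored"])  # Administrator False
```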

Location
The hashes are located in the Windows\System32\config directory, in the SAM and SYSTEM files. They are also located in the registry hive HKEY_LOCAL_MACHINE\SAM, which cannot be accessed at run time. Finally, backup copies can often be found in Windows\Repair.

Tool – PwDump7 – http://www.tarasco.org/security/pwdump_7/
This tool can be executed on the target machine to recover the system hashes. Simply download and run the binary with at least administrator account privileges.

Tool – Windows Credential Editor – http://www.ampliasecurity.com/
Windows Credentials Editor (WCE) is great for dumping passwords that are in memory. Personally I typically use it with the -w flag to dump passwords in clear text. This can often net you passwords that are infeasible to get any other way.

Tool – Meterpreter
If you have a meterpreter shell on the system, often you can get the hashes by calling the hashdump command.

Method – Recovery Directory
Occasionally you may not have direct access to the file required, or perhaps even command line interaction with the victim. An example of this would be a local file inclusion attack on a web service. In those cases it’s recommended you try and recover the SYSTEM and SAM files located in the Windows\Repair directory.

Method – Live CD
Sometimes you may have physical access to the computer but wish to dump the passwords for cracking later. Using a Live CD is a common method of being able to mount the Windows drive and recover the SYSTEM and SAM files from the System32/config directory since the OS isn’t preventing you access.

 


1. ShowBox

Showbox is an app that has been around for quite some time, and it seems like everybody has heard of it. Showbox is a solid Android app because of the user interface it provides and because of how simple and easy it is to use. Not only that, but it also offers well-known movies, including movies you can find in theatres.

Download ShowBox here

2. Videoder

Videoder is an Android app that allows you to download YouTube videos. Videoder also gives you the option to download a YouTube video as an MP3, so you can basically download music onto your Android device.

Download Videoder here

3. FileChef

FileChef allows you to download any file you can think of: apps, movies, TV shows, MP3 songs, and much more. The interface is very simple, and there is not much of a learning curve.

Download FileChef here

4. RedBox TV

RedBox TV is the newest app that allows you to watch live TV and live sports for free on any Android device. This app has UK and US channels, and it also has channels from around the world. The user interface is very easy to navigate, and I highly recommend giving RedBox TV a try.

Download RedBox TV here

5. AndroDumpper

AndroDumpper is an app that allows you to hack a Wi-Fi password on any Android device. This app will work as long as you target the right Wi-Fi router and you’re close to that Wi-Fi network.

Download AndroDumpper here

Bonus App:

CreeHack

CreeHack is a simple app that allows you to get in-app purchases for free. Simply tap Activate, hit the home button, and you are good to go.

Download CreeHack here

ChaosVPN is a system to connect hackers.

Its design principles include: no single point of failure, full encryption, use of RFC 1918 IP ranges, scaling well beyond 100 connected networks, and the ability to run on the embedded hardware found in today's routers. It is designed so that no one sees other people's traffic. It should be largely auto-configuring: apart from the joining node, no administrator of the network should need to do anything when a node joins or leaves. If you want a network with no single point of failure, low latency (for Voice over IP) and isolation of each node's traffic, you end up pretty quickly with a full-mesh network.

Therefore we came up with the tinc solution. tinc builds a fully meshed peer-to-peer network, and it defines endpoints rather than tunnels.

ChaosVPN connects hackers wherever they are: road warriors with their notebooks, servers (even virtual ones in datacenters), hacker houses, and hackerspaces. To sum it up, we connect networks, possibly down to a single /32.
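For illustration, a hand-written tinc configuration for one mesh node could look like this. The node names, hostname and subnet are made up, and the chaosvpn package normally generates these files for you:

```
# /etc/tinc/chaosvpn/tinc.conf -- illustrative values only
Name = mynode
Mode = router
ConnectTo = peer1
ConnectTo = peer2

# /etc/tinc/chaosvpn/hosts/mynode
Address = mynode.example.org
Subnet = 172.31.23.0/24
```

Each node carries a host file for every peer it talks to; because every node can `ConnectTo` many peers, the result is the full mesh described above rather than a hub-and-spoke of tunnels.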

So there we are. It is working, usage seems to be increasing, more nodes are joining, and more services are popping up.

Installation

  • Add the package repository

    If you get an “E: The package bison is not available for the candidate” error, add the following lines to your sources.list file and update:
    deb http://debian.sdinet.de/ stable chaosvpn
    deb-src http://debian.sdinet.de/ stable chaosvpn
    apt-get update

  • Install
    apt-get install chaosvpn
    If chaosvpn cannot be installed because of a missing libssl dependency, add the wheezy security repository and install libssl1.0.0 first:
    vi /etc/apt/sources.list
    deb http://security.debian.org/debian-security wheezy/updates main
    apt-get update
    apt-get install libssl1.0.0
    apt-get install chaosvpn

Configuration

  • Generate the tinc keys for the chaosvpn network
    mkdir -p /etc/tinc/chaosvpn
    tincd --net=chaosvpn --generate-keys=2048
    If you get an “Error opening file `/etc/tinc/chaosvpn/rsa_key.priv': No such file or directory” error, the directory name under /etc/tinc must match the network name passed to --net; create it and run the key generation again:
    mkdir -p /etc/tinc/chaosvpn
    tincd --net=chaosvpn --generate-keys=2048
  • Edit the configuration
    vi /etc/tinc/chaosvpn.conf
    Change the parameters:
    $my_vpn_ip = 172.31.x.x
    Only use a-z, 0-9 and underscores for names.
    The IP address must be changed to an address in 172.31.0.0/16.
    Save and exit.
  • To join ChaosVPN you must also write a letter of introduction stating who you are and your motivation, and send it to chaosvpn-join@hamburg.ccc.de
  • Once you have joined, running chaosvpn in a terminal will show some status information.

  • Start the service
    /etc/init.d/chaosvpn start
  • Check the ChaosVPN network routes
    route -n
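Since ChaosVPN uses the 172.31.0.0/16 range, its routes can be picked out of the route -n output. A small sketch on sample output (the sample lines and the chaos_vpn interface name are illustrative):

```shell
# Illustrative `route -n` output; the awk filter keeps only 172.31.x.x routes
# and prints the interface they are bound to.
sample='Destination     Gateway         Genmask         Flags Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    eth0
172.31.0.0      0.0.0.0         255.255.0.0     U     chaos_vpn'
echo "$sample" | awk '$1 ~ /^172\.31\./ {print $NF}'
```

Against a live system you would pipe the real `route -n` output through the same awk filter.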

 

Open Elasticsearch nodes on Shodan

Posted: 06/01/2018 in Uncategorized

Administrators like to use Elasticsearch (What is Elasticsearch?) as a real-time data search and analytics tool. However, many administrators forget to secure these nodes.

With a simple search on Shodan, we can find exposed Elastic indices:

https://www.shodan.io/search?query=port:”9200″ product:”Elastic”

Confidential information can be accessed through these addresses; below is the syntax to use:

http://IP:9200/_search?pretty

Here are some basic recommendations for securing your nodes :

  • Only allow direct access from known IP addresses (source to destination)
  • Add authentication to the Elastic node (2FA all the way)
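The first recommendation can be sketched with host firewall rules. 198.51.100.5 is a placeholder trusted address, and the rules are printed as a dry run; remove the echos and run as root to apply them:

```shell
TRUSTED=198.51.100.5    # placeholder trusted source address
# Dry run: allow port 9200 only from the trusted host, drop everyone else.
echo "iptables -A INPUT -p tcp --dport 9200 -s $TRUSTED -j ACCEPT"
echo "iptables -A INPUT -p tcp --dport 9200 -j DROP"
```

Binding Elasticsearch to an internal interface instead of 0.0.0.0 achieves a similar effect at the application level.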

PoC

  1. Use this filter on Shodan to find Elastic nodes: port:”9200″ product:”Elastic”
  2. Check the Elastic connection: http://IP:9200
  3. Execute a search: http://IP:9200/_search?pretty
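The PoC query can be scripted as below. 203.0.113.10 is a documentation placeholder IP, and the curl call is left commented out; only run it against nodes you are authorized to test:

```shell
NODE=203.0.113.10    # placeholder IP; substitute a node you are authorized to test
SEARCH_URL="http://${NODE}:9200/_search?pretty"
# curl -s "$SEARCH_URL"
echo "$SEARCH_URL"
```

The `_search` endpoint with no query returns documents from all indices, which is exactly why an unauthenticated node leaks everything it holds.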

This node discloses confidential information; we can use it to access all accounts.

Now we can use this information to access the Elastic backend.

After being contacted, the company has secured their node.

For help securing Elasticsearch, watch the video at the link below:

https://www.elastic.co/elasticon/conf/2016/sf/securing-elasticsearch

Also see Amazon Elasticsearch Service (Amazon ES) Developer Guide