Tuesday, July 2, 2024

Web App Authorisation Coverage Scanning




Web app authorisation coverage scanning.

Introduction

AuthCov crawls your web application using a headless Chrome browser while logged in as a pre-defined user. It intercepts and logs API requests as well as pages loaded during the crawling phase. In the next phase it logs in under a different user account, the "intruder", and attempts to access each of the API requests or pages discovered previously. It repeats this step for each intruder user defined. Finally it generates a detailed report listing the resources discovered and whether or not they are accessible to the intruder users.

An example report generated from scanning a local WordPress instance:


Options

  • Works with single-page applications and traditional multi-page applications
  • Handles token-based and cookie-based authentication mechanisms
  • Generates an in-depth report in HTML format
  • Screenshots of each page crawled can be viewed in the report

Installation

Install the latest node version. Then run:
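The install command itself has been lost from this copy of the page. Assuming AuthCov is distributed as an npm package providing the authcov CLI used below, the conventional global install would be:

$ npm install -g authcov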

Usage

  1. Generate a config for the site you want to scan [NOTE: it has to end in the .mjs extension]:
$ authcov new myconfig.mjs
  2. Update the values in myconfig.mjs
  3. Test your configuration values by running this command to make sure the browser is logging in successfully:
$ authcov test-login myconfig.mjs --headless=false
  4. Crawl your site:
$ authcov crawl myconfig.mjs
  5. Attempt intrusion against the resources discovered during the crawling phase:
$ authcov intrude myconfig.mjs
  6. View the generated report at: ./tmp/report/index.html

Configuration

The following options can be set in your config file:

baseUrl (string): The base URL of the site. This is where the crawler will start from.

crawlUser (object): The user to crawl the site as. Example: {"username": "admin", "password": "1234"}

intruders (array): The users who will intrude on the API endpoints and pages discovered during the crawling phase. Generally these will be users with the same or less privilege than the crawlUser. To intrude as a not-logged-in user, add a user with the username "Public" and password null. Example: [{"username": "john", "password": "4321"}, {"username": "Public", "password": null}]

type (string): Is this a single-page application (i.e. a JavaScript frontend which queries an API backend) or a more "traditional" multi-page application? (Choose "mpa" or "spa".)

authenticationType (string): Does the site authenticate users by using the cookies sent by the browser, or by a token sent in a request header? For an MPA this will almost always be set to "cookie". In an SPA this could be either "cookie" or "token".

authorisationHeaders (array): Which request headers need to be sent in order to authenticate a user? If authenticationType=cookie, then this should be set to ["cookie"]. If authenticationType=token, then this will be something like: ["X-Auth-Token"].

maxDepth (integer): The maximum depth to crawl the site to. It is recommended to start at 1 and then try crawling at greater depths to make sure the crawler is able to finish fast enough.

verboseOutput (boolean): Log at a verbose level, useful for debugging.

saveResponses (boolean): Save the response bodies from API endpoints so you can view them in the report.

saveScreenshots (boolean): Save browser screenshots for the pages crawled so you can view them in the report.

clickButtons (boolean): (Experimental feature) on each page crawled, click all the buttons on that page and record any API requests made. Can be useful on sites which have a lot of user interactions through modals, popups etc.

xhrTimeout (integer): How long to wait for XHR requests to complete while crawling each page. (seconds)

pageTimeout (integer): How long to wait for a page to load while crawling. (seconds)

headless (boolean): Set this to false for the crawler to open a Chrome browser so you can watch the crawling happen live.

unAuthorizedStatusCodes (array): The HTTP response status codes that decide whether or not an API endpoint or page is authorised for the user requesting it. Optionally define a function responseIsAuthorised to determine if a request was authorised. Example: [401, 403, 404]

ignoreLinksIncluding (array): Do not crawl URLs containing any of the strings in this array. For example, if set to ["/logout"] then the URL http://localhost:3000/logout will not be crawled. Optionally define a function ignoreLink(url) to determine if a URL should be crawled or not.

ignoreAPIrequestsIncluding (array): Do not record API requests made to URLs which contain any of the strings in this array. Optionally define a function ignoreApiRequest(url) to determine if a request should be recorded or not.

ignoreButtonsIncluding (array): If clickButtons is set to true, then do not click buttons whose outer HTML contains any of the strings in this array. Optionally define a function ignoreButton(url).

loginConfig (object): Configure how the browser will log in to your web app. Optionally define an async function loginFunction(page, username, password). (More about this below.)

cookiesTriggeringPage (string): (optional) when authenticationType=cookie, this sets a page which the intruder will browse to before capturing the cookies from the browser. This can be useful if the site sets the path field on cookies. Defaults to options.baseUrl.

tokenTriggeringPage (string): (optional) when authenticationType=token, this sets a page which the intruder will browse to before capturing the authorisationHeaders from the intercepted API requests. This can be useful if the site's baseUrl does not make any API requests, so the auth headers cannot be captured from that page. Defaults to options.baseUrl.
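To illustrate how the options above fit together, here is a rough sketch of a myconfig.mjs for a cookie-authenticated MPA. All URLs, credentials and selectors are placeholders rather than AuthCov defaults, and the default-exported object shape is an assumption suggested by the .mjs extension — generate a real template with authcov new:

// myconfig.mjs — illustrative sketch only
export default {
  baseUrl: 'http://localhost:3000/',
  crawlUser: {username: 'admin', password: '1234'},
  intruders: [
    {username: 'john', password: '4321'},
    {username: 'Public', password: null}  // the not-logged-in user
  ],
  type: 'mpa',
  authenticationType: 'cookie',
  authorisationHeaders: ['cookie'],
  maxDepth: 1,
  verboseOutput: false,
  saveResponses: true,
  saveScreenshots: true,
  clickButtons: false,
  xhrTimeout: 5,
  pageTimeout: 30,
  headless: true,
  unAuthorizedStatusCodes: [401, 403, 404],
  ignoreLinksIncluding: ['/logout'],
  ignoreAPIrequestsIncluding: [],
  loginConfig: {
    url: 'http://localhost:3000/login',
    usernameXpath: 'input[name=email]',
    passwordXpath: 'input[name=password]',
    submitXpath: '#login-button'
  }
};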

Configuring the Login

There are two ways to configure the login in your config file:

  1. Using the default login mechanism, which uses Puppeteer to enter the username and password into the specified inputs and then click the specified submit button. This can be configured by setting the loginConfig option in your config file like this. See this example too.
"loginConfig": {
  "url": "http://localhost/login",
  "usernameXpath": "input[name=email]",
  "passwordXpath": "input[name=password]",
  "submitXpath": "#login-button"
}
  2. If your login form is more complex and involves more user interaction, you can define your own Puppeteer function in your config file like this. See this example too.
"loginFunction": async function(page, username, password){
  await page.goto('http://localhost:3001/users/sign_in');
  await page.waitForSelector('input[type=email]');
  await page.waitForSelector('input[type=password]');

  await page.type('input[type=email]', username);
  await page.type('input[type=password]', password);

  await page.tap('input[type=submit]');
  await page.waitFor(500);

  return;
}

Don't forget to run the authcov test-login command in headful mode in order to verify that the browser logs in successfully.

Contributing

Clone the repo and run npm install. Best to use node version 17.1.0.

Tests

Unit tests:
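The actual command was stripped from this copy of the page; assuming the repo wires up the standard npm test script, this would be:

$ npm test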

End2End tests:

First download and run the example app. Then run the tests:
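The command is likewise missing here; it will be an npm script defined in the repo's package.json (the script name end2end below is a guess — check package.json for the real one):

$ npm run end2end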


