The authentication section was a lot more comfortable. Lots of enumeration and brute forcing – as should be expected for a section on authentication. And I’ve got more practice in this area, so it was a little more familiar. It probably also helped to have knocked the dust off the web app pen testing stuff a bit with the SQLi section.
Labs were solid. Scripting was good for me to work with sessions and passing cookies and CSRF tokens (my scripts are here – link is to the repo in case I get a wild hair and reorganize). There were also some techniques I hadn't used before – passing multiple passwords in a single attempt was interesting. These labs build really well from one to the next in terms of difficulty and expanding on techniques.

There were a lot of refreshers about looking for vulnerabilities in authentication – especially things like using really long passwords to test for timing differences in responses. There were also some reminders that you shouldn't just discount obvious bypasses, like the 2FA simple bypass lab. Testing some common usernames alongside ones that are very unlikely to exist is a good way to look for response differences, and there are enough username and password lists on a basic Kali distro to enumerate quite a bit and see what you can see.
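As a rough sketch of the session/CSRF handling those scripts deal with (the `csrf` field name and the login flow in the comments are assumptions for illustration, not from any particular lab):

```python
import re

def extract_csrf(html, field="csrf"):
    """Pull a CSRF token out of a hidden form input.

    Assumes the common pattern <input ... name="csrf" value="...">;
    real apps may use a different field name or a meta tag instead.
    """
    match = re.search(
        r'name="{}"\s+value="([^"]+)"'.format(re.escape(field)), html
    )
    return match.group(1) if match else None

# With requests, a typical login flow then looks roughly like:
#   s = requests.Session()                      # keeps cookies across requests
#   token = extract_csrf(s.get(login_url).text)
#   s.post(login_url, data={"csrf": token, "username": u, "password": p})

sample = '<form><input type="hidden" name="csrf" value="abc123"></form>'
print(extract_csrf(sample))  # → abc123
```

Using a persistent session object is what makes the cookie juggling painless – grab the token from the GET, submit it with the POST, and the session cookie rides along automatically.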
Takeaways for me were:
- Try the obvious stuff like going directly to the desired URL
- Click and fill in all the things with different types of input (long/short, valid/invalid, etc.)
- Test likely and highly unlikely input
- Make an account if possible to explore more functionality
- Carefully track the authentication flow to craft effective exploits
- Combination of Burp and Python works well (for me)
I found Python to be more effective for some things than Burp. For instance, the 2FA brute force was a bit better for me using Python because I set an alarm to beep when the correct code was found. I think my Python script and Burp Pro were similar in terms of time – it mostly depends on where in the run the correct 2FA code turns up. It was also convenient to set a script working on enumeration while exploring other things. That could be done with Burp (Pro) or ZAP as well, but having scripts that target specific things was helpful. In a pen testing situation, I can see the utility of kicking off a few things to run while I'm doing other things. I didn't put a ton of focus into making the scripts completely generic, but building up from basic enumeration of response differences to the 2FA brute force gave me some good ideas about how I might take a generic script and build it out while targeting a specific app.
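A stripped-down sketch of that 2FA brute-force pattern, with the actual submission stubbed out (in the real script, `try_code` would POST the candidate code along with the session cookie; here it's just a stand-in so the loop logic is visible):

```python
def candidate_codes(digits=4):
    """Yield every zero-padded numeric code, e.g. 0000 through 9999."""
    for n in range(10 ** digits):
        yield str(n).zfill(digits)

def brute_force_2fa(try_code, digits=4):
    """Submit codes until one succeeds, then sound the 'alarm'."""
    for code in candidate_codes(digits):
        if try_code(code):        # stand-in for the HTTP POST + response check
            print("\a", end="")   # terminal bell – the beep that lets you walk away
            return code
    return None

# Stubbed demo: pretend 0731 is the valid code.
found = brute_force_2fa(lambda c: c == "0731")
print(found)  # → 0731
```

The terminal bell (`\a`) is the whole trick – start the loop, go work on something else, and come back when it beeps.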
My general framework for looking for authentication vulnerabilities manually:
- Carefully note responses on different attempts
- Attempt login with known incorrect information
- Attempt login with some likely usernames
- Test different password lengths to look for potential timing differences and password limitations
- If blocks or rate limiting are in place, experiment with options to mask the source IP (like the X-Forwarded-For header)
- Carefully examine how data is passed to see what is submitted and in what format
- Is it JSON?
- What needs to be passed?
- What can be manipulated?
- Look for ways that other vulnerabilities can be combined to compromise authentication mechanisms
- Ex: XSS and session hijacking (e.g., stealing session cookies)
- If you can make an account, try different password lengths and strengths to see what complexity requirements exist
- Is the username an email or something else?
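The enumeration steps above can be sketched as a simple comparison of responses across candidate usernames. This version works on pre-collected (status, body length) pairs so it runs standalone – the sample data and the specific lengths are made up for illustration; a real script would make one request per candidate and could compare timing the same way:

```python
from collections import Counter

def enumerate_usernames(responses):
    """Flag usernames whose (status, body length) differs from the crowd.

    `responses` maps username -> (status_code, content_length). The
    majority response is treated as the 'invalid username' baseline;
    anything that deviates is worth a closer look.
    """
    baseline, _ = Counter(responses.values()).most_common(1)[0]
    return [user for user, resp in responses.items() if resp != baseline]

# Stubbed demo: 'carlos' gets a slightly longer error body than the rest
# (e.g. "Incorrect password" vs "Invalid username").
fake = {
    "admin":  (200, 3120),
    "root":   (200, 3120),
    "carlos": (200, 3156),
    "guest":  (200, 3120),
}
print(enumerate_usernames(fake))  # → ['carlos']
```

Treating the most common response as the baseline means you don't have to know in advance what a "wrong username" looks like – the outliers identify themselves.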
Reviewing/getting familiar with HTTP headers is also critical for web app pen testing. I think the MDN HTTP headers reference from Mozilla is solid and easy to understand.
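As a small aid while reviewing responses, something like this can surface the headers most relevant to authentication (the header list is my own pick, not exhaustive, and the demo response is made up):

```python
# Headers I tend to look at first when poking at authentication.
AUTH_RELEVANT = [
    "Set-Cookie",                  # session handling; HttpOnly/Secure flags?
    "WWW-Authenticate",            # HTTP auth scheme in use
    "Cache-Control",               # are credentialed responses cacheable?
    "Strict-Transport-Security",   # is HTTPS enforced?
]

def headers_of_interest(headers):
    """Return the auth-relevant headers present in a response, case-insensitively."""
    lowered = {k.lower(): v for k, v in headers.items()}
    return {h: lowered[h.lower()] for h in AUTH_RELEVANT if h.lower() in lowered}

demo = {"Set-Cookie": "session=abc; HttpOnly", "Content-Type": "text/html"}
print(headers_of_interest(demo))  # → {'Set-Cookie': 'session=abc; HttpOnly'}
```

Header names are case-insensitive per the HTTP spec, hence the lowercasing before the lookup.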
One of the things I think about a lot when working through labs is “how can this be translated to the real world?”. Overall, the PortSwigger labs make it fairly easy to see how an individual lab’s technique could be used, and the way the labs scaffold can help build a roadmap for pen testing. One of the things I’m trying to do (with all of the labs, not just authentication) is pay attention to when the different techniques might be useful as I’m on different sites/applications (never testing without permission, of course). I think asking yourself “what would I do next?” as you go through each lab is also helpful. If we think in terms of the typical attack chain, the next step after gaining additional access is enumeration – so how would you do that? It can be easy when doing labs to get so focused on the solution that you lose sight of the application part.
I also would not necessarily recommend scripting the labs (or all of the labs) to everyone. I’m doing so with a very specific purpose in mind – I want to get better with Python. Is there some utility in terms of potential use for pen testing? Yes, for me perhaps with some bug bounty programs and for someone with a more offensive focus for general usage. Is it forcing me to make sure I understand what’s going on? Yes because scripting it doesn’t let me just point and click through. But if you are already comfortable with Python (or don’t have a need to improve that skillset), I think your time is better spent focusing on the techniques and content within BurpSuite than doing the extra work on scripting.
Something else I want to mention – my write-ups here are quite informal. I try to not slip too much into technical writing because that’s not my intent here. But if you need to work on your report writing (which is probably all of us), consider presenting your labs as you would a pen test report. Writing for different audiences was something a lot of my students struggled with, and I think taking the time to practice is valuable. Depending on your goals, you may want to consider building out examples of several types of report – executive, technical, etc. to demonstrate how you would change your approach for different stakeholders. Those can also demonstrate some potential/experience when you are job hunting.