So, likely as not, you want to do this on either Linux or Windows. But in case you want to do it on both (and it's fun to!), we'll look at both.
Writing a bash script (or any shell script) is as easy as writing out the commands you want to run, line by line, just as if you were typing them at the prompt. There's also some extra stuff you can optionally do to get some extra flexibility!
There are two ways to do this: let's start with the first one, chaining commands together.
The easiest way to do things is to run different commands line-by-line. If you want to send the output of one command to another, you use the pipe |.
cd /home/eric/wordlist/
cat list_of_usernames.txt | grep 'jones'
Now for the code above, we are trying to go to a folder, and get all the usernames with 'jones' as part of the username. But there are some issues here. For one, if we know the username file is always in that folder, we can just get it out of there straightaway.
cat /home/eric/wordlist/list_of_usernames.txt | grep 'jones'
Now, this here is what is nicknamed 'cat abuse'. We don't really need to cat out the file.
user@mycomputer $ grep
Usage: grep [OPTION]... PATTERNS [FILE]...
Try 'grep --help' for more information.
So we see we can actually pass the file straight to grep.
grep 'jones' /home/eric/wordlist/list_of_usernames.txt
But this is kind of hard to edit. You'd have to go into the line to change what you are looking for every time. And if you are referencing it multiple times, it is really annoying. For example, what if we want to print out what we are searching for every time like so?
echo 'Searching for jones in /home/eric/wordlist/list_of_usernames.txt'
grep 'jones' /home/eric/wordlist/list_of_usernames.txt
Try and edit that script to load a different list and search for a different name. (It doesn't have to exist!)
Really annoying, right? Good news is that we have a solution: variables! You can assign variables very easily: variable_name=value. And you can use it with $variable_name.
Just note one thing: you can only do that with stuff in double quotes, not single quotes. A good rule of thumb when writing bash scripts is to use double-quotes if you really want to include something like a variable, and single-quotes otherwise. Also, don't put spaces on either side of the equal sign!
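Here's a quick sketch of the difference (the variable name is just an example):
name='jones'
echo "Searching for $name"    # double quotes expand the variable: Searching for jones
echo 'Searching for $name'    # single quotes do not: Searching for $name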
Let's rewrite this to use variables:
searching_for='jones'
file='/home/eric/wordlist/list_of_usernames.txt'
cat "Searching for $jones in $searching_for"
grep $searching_for $file
Much better! But it's still not perfect. Do you really want to edit the script file every single time? That's so slow. You'd be better off just typing the command every time. The fix is a function: a reusable chunk of script you can hand different arguments to each time you call it.
A bash function is written like this:
function_name () {
    echo "I was called with $1"
}
user@mycomputer $ function_name "cheep-cheep"
I was called with cheep-cheep
You put in all the code you want to run inside the curly braces. And you can access anything passed to your function with $1, $2, $3, etc. There's also $#, which is how many arguments (that's what those are called) were passed to the function.
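For instance, here's a throwaway example (the function name and arguments are made up):
count_args () {
    # prints how many arguments were passed, plus the first two
    echo "I got $# arguments; the first was '$1' and the second was '$2'"
}

user@mycomputer $ count_args alpha beta
I got 2 arguments; the first was 'alpha' and the second was 'beta'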
So let's put this together. We always want to give our function a pattern, but we may not always want to give it a file: we could have a default one, but if we are given one, we can use the one we are given.
function pattern_search () {
    # Default file, used when no second argument is given
    file='/home/eric/wordlist/list_of_usernames.txt'
    if [ "$#" -eq 2 ]
    then
        file=$2
    fi
    echo "Searching for $1 in $file"
    grep "$1" "$file"
}
So, what's going on here? We first set a default file, then check whether we've got two arguments. If we do, we replace file with the second argument. Finally, we print what we're searching for and run grep.
Two things to note. First, what is that if construct? Even if you're used to if statements in other languages, the syntax can be a bit weirder in bash. I'd recommend reading this and this for an explanation.
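As a minimal sketch of the syntax (note the spaces inside the brackets, and the fi that closes the block):
if [ "$#" -eq 2 ]
then
    echo "Got exactly two arguments"
else
    echo "Got $# arguments instead"
fi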
The other thing to note is that now when running our grep command we are surrounding the arguments with quotes. Read more about that here.
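The short version: without quotes, a value containing spaces gets split into separate words. Here's a sketch (the pattern and filename are made-up examples):
pattern='eric jones'
grep $pattern list_of_usernames.txt      # unquoted: grep searches for 'eric' in a file called 'jones' and in list_of_usernames.txt
grep "$pattern" list_of_usernames.txt    # quoted: grep searches for the whole phrase 'eric jones'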
Let's put it all together. We can run this with bash file.bash, but if we add a line to the top of our file and make it executable, we can run it with just `./file.bash` and Linux will know what to do. We'll also call the function at the bottom of the script, passing along whatever arguments the script itself was given.
#!/bin/bash
function pattern_search () {
    # Default file, used when no second argument is given
    file='/home/eric/wordlist/list_of_usernames.txt'
    if [ "$#" -eq 2 ]
    then
        file=$2
    fi
    echo "Searching for $1 in $file"
    grep "$1" "$file"
}

# Pass the script's own arguments straight through to the function
pattern_search "$@"
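Running it then looks something like this (assuming you saved it as file.bash; any matching lines from grep follow the first line of output):
user@mycomputer $ chmod +x file.bash
user@mycomputer $ ./file.bash 'jones'
Searching for jones in /home/eric/wordlist/list_of_usernames.txt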
Note that if you want this function to be available all the time, you can just put the function into your .bashrc file.
Note that if you want to check your code, a useful site is ShellCheck.
However, while you can chain commands together and maybe find something useful, there's an even easier way, if you know the right command.
The find command is a useful command to find files with. You have to give it a starting place, and some criteria.
user@mycomputer $ find /home/kurt/ -iname 'username'
All the flags and arguments to find are optional. By default it starts searching from the current directory. Note that you can also give it a -maxdepth argument to determine how deep into subfolders it goes. Use find --help to read up on all its many options.
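For example, to stay within the top two levels of folders (the path and pattern here are just placeholders):
user@mycomputer $ find /home/kurt/ -maxdepth 2 -iname 'username*'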
What makes find very useful for our purposes is that it has two arguments that are very handy. One is -mtime. You can give this argument a number with a + or - in front of it. So -mtime +30 will search for files over 30 days old. I'm sure you can see why this is very handy.
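For example (the path is just an assumption; point it wherever you like):
find /home/eric/wordlist/ -type f -mtime +30    # files last modified more than 30 days ago
find /home/eric/wordlist/ -type f -mtime -30    # files last modified within the last 30 days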
The other, and be careful with this one, is -exec. This will run a shell command for every file found. Do not use rm when playing with this! Use ls instead to see which files your command would delete. And even when you do use rm, you can pass the -i flag so it asks for confirmation before each file is deleted.
To use this argument, simply write the command you want to run after it, with {} standing in wherever you want the filename to go, and end the command with \; (the semicolon ends the -exec command, and the backslash makes sure the shell doesn't treat it as the end of the whole find command). For example,
user@mycomputer $ find -mtime -30 -type f -exec wc -l {} \;
This will look for all files (that's what the -type f is doing) that are less than 30 days old and give us the number of lines in them using wc.
So combine this with your knowledge of shell scripting and writing functions above, and you have all you need to write the script you want.
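As a sketch of what that combination might look like (the function name, default path, and 60-day cutoff are all just examples, and it deliberately uses ls rather than rm):
#!/bin/bash
# Example helper: list files older than a given number of days under a given directory
list_old_files () {
    days=30
    if [ "$#" -eq 2 ]
    then
        days=$2
    fi
    echo "Files under $1 older than $days days:"
    find "$1" -type f -mtime "+$days" -exec ls -lh {} \;
}

list_old_files /home/eric/wordlist 60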
Alright, what about Windows? PowerShell is my go-to way on Windows. It works in many of the same ways as bash. You can pipe the output of one command to the next with the | (pipe) operator.
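For example, a rough PowerShell cousin of the cat-and-grep example from earlier (the path is made up) might be:
# Same idea as cat | grep above (example path):
Get-Content C:\Users\eric\wordlist\list_of_usernames.txt | Select-String 'jones'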
We'll get to how to save and run PowerShell scripts in a little bit. One thing to note is that there is no exact match for the find command. On the other hand, the syntax is so much clearer (in my opinion!)
So let's look at the pieces:
Pipe these commands together, and you have everything you need.
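As a rough sketch of such a pipeline (the path and the 30-day cutoff are just example choices), it might look something like this:
# Example only: files under the wordlist folder last written more than 30 days ago,
# with a line count for each (like the find/wc example above)
Get-ChildItem -Path C:\Users\eric\wordlist -File -Recurse |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
    ForEach-Object { "$($_.FullName): $((Get-Content $_.FullName | Measure-Object -Line).Lines) lines" }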
Using this, in a good and proper way, is a bit more involved. What I'd recommend is turning your script into a module. It's not hard.
Let's first make sure that PowerShell is set up to run our scripts. Run the following to change the execution policy.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
Read up on execution policies, but the gist is that we are setting it so that scripts downloaded from elsewhere have to be signed, while our own local scripts can run without being signed.
The next thing to do is to check for the PowerShell folder (used by the newer versions of PowerShell). If you don't have one in your Documents folder, make it. Then make sure there is a Microsoft.PowerShell_profile.ps1 file in it, and a powershell.config.json file containing the value {"Microsoft.PowerShell:ExecutionPolicy":"RemoteSigned"}.
You can paste the following lines into PowerShell to check it's all there:
Test-Path "$env:USERPROFILE\Documents\PowerShell"
Test-Path "$env:USERPROFILE\Documents\PowerShell\powershell.config.json"
Test-Path "$env:USERPROFILE\Documents\PowerShell\Microsoft.PowerShell_profile.ps1"
(Get-Content "$env:USERPROFILE\Documents\PowerShell\powershell.config.json") -eq '{"Microsoft.PowerShell:ExecutionPolicy":"RemoteSigned"}'
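If any of those checks come back False, a sketch for creating what's missing (assuming the default Documents location) is:
# Create the folder and files if they don't exist yet
New-Item -ItemType Directory -Path "$env:USERPROFILE\Documents\PowerShell" -Force
New-Item -ItemType File -Path "$env:USERPROFILE\Documents\PowerShell\Microsoft.PowerShell_profile.ps1" -Force
Set-Content -Path "$env:USERPROFILE\Documents\PowerShell\powershell.config.json" -Value '{"Microsoft.PowerShell:ExecutionPolicy":"RemoteSigned"}'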
Now you're ready to make your first module. In the Modules directory in the PowerShell directory, make a folder. Name it whatever you want your module to be called. In that folder, make a .psm1 file called the same name as your folder. That file is your module file.
function My-Module {
    # put your code in here
}
Export-ModuleMember -Function My-Module
Make sure that the name in the last line, where the function is exported, is the same as your function name. At the same time, My-Module is a terrible name and PowerShell will complain at you about it. The right way to name a function is Verb-Noun. The noun can be whatever you want (File, Stuff, etc.). The verb, on the other hand, should come from a list of approved verbs, which you can see by running Get-Verb in a PowerShell window.
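So a better-behaved version of the sketch above might look like this (Find-Username is just a hypothetical name; Find is on the approved-verb list):
function Find-Username {
    # put your search code in here
}
Export-ModuleMember -Function Find-Username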
When you are done, open up the Microsoft.PowerShell_profile.ps1 file and add your module: just add the line Import-Module MyModule. Make sure you use the folder name, and not your function name.
If you've done everything right, you can open a new PowerShell window and punch in the name of your function and run it.
P. S. PowerShell functions are just as powerful as bash functions, but learning to use them is left as an exercise for the reader. The documentation is pretty straightforward, so you'll be fine!