Set up an FTP user account minus shells

It's important to give your strictly FTP users no real shell account on the Linux system. That way, if for any reason someone manages to break out of the FTP chrooted environment, they will not be able to execute any user tasks since they don't have a shell. First, create new users for this purpose;

These users will be the users allowed to connect to your FTP server.
This has to be separate from a regular user account with unlimited access because of how the chroot environment works. Chroot makes it appear from the user's perspective as if the level of the file system you've placed them in is the top level of the file system.
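As a quick illustration, here is roughly what an FTP session might look like once the setup below is complete (hypothetical output; exact server replies vary). The client's "/" is really /home/ftp on the server, and the user lands in /ftpadmin:

ftp> pwd
257 "/ftpadmin" is current directory.
ftp> cd /
250 CWD command successful.
ftp> pwd
257 "/" is current directory.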

Use the following command to create users in the /etc/passwd file. This step must be done for each additional new user you allow to access your FTP server.

[root@deep ] /# mkdir /home/ftp
[root@deep ] /# useradd -d /home/ftp/ftpadmin/ -s /dev/null ftpadmin > /dev/null 2>&1
[root@deep ] /# passwd ftpadmin

Changing password for user ftpadmin
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully

  • The mkdir command will create the ftp directory under the /home directory to hold the home directories of all the FTP users we'll have on the server.

  • The useradd command will add the new user named ftpadmin to our Linux server.

  • Finally, the passwd command will set the password for this user ftpadmin.

Once the /home/ftp/ directory has been created, you don't have to run the mkdir command again for additional FTP users.
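For example, adding a second FTP-only user follows exactly the same pattern; the name ftpuser2 below is just a placeholder:

[root@deep ] /# useradd -d /home/ftp/ftpuser2/ -s /dev/null ftpuser2 > /dev/null 2>&1
[root@deep ] /# passwd ftpuser2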
  1. Edit the /etc/shells file, vi /etc/shells, and add a non-existent shell name such as /dev/null. This fake shell will limit access on the system for FTP users.

              [root@deep ] /# vi /etc/shells

    /bin/bash
    /bin/sh
    /bin/ash
    /bin/bsh
    /bin/tcsh
    /bin/csh
    /dev/null
    The /dev/null entry is our added non-existent shell. With Red Hat Linux, a special device name /dev/null exists for purposes such as these.

  2. Now, edit your /etc/passwd file and manually add the /./ marker to separate the /home/ftp directory from the /ftpadmin directory, where the user ftpadmin should be automatically chdir'd to. This step must be done for each FTP user you add to your passwd file.

              ftpadmin:x:502:502::/home/ftp/ftpadmin/:/dev/null
    To read:
              ftpadmin:x:502:502::/home/ftp/./ftpadmin/:/dev/null
    The account is ftpadmin, but you'll notice the path to the home directory is a bit odd. The first part, /home/ftp/, indicates the part of the filesystem that should be considered their new root directory. The dot (.) divides that from the directory they should be automatically chdir'd (change directory'd) into, /ftpadmin/.

Once again, the /dev/null part disables their login as a regular user. With this modification, the user ftpadmin now has a fake shell instead of a real shell resulting in properly limited access on the system.
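To double-check the result, you can verify the new passwd entry and confirm that the fake shell refuses an interactive login (the exact error message varies between systems; this is only a sketch):

[root@deep ] /# grep ftpadmin /etc/passwd
ftpadmin:x:502:502::/home/ftp/./ftpadmin/:/dev/null
[root@deep ] /# su - ftpadmin
su: cannot run /dev/null: Permission denied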



http://www.faqs.org/docs/securing/chap29sec295.html

This morning I woke up and found that my site was gone. I'm not just talking about the server being down here - no, the site (or better, my blog's contents) was gone. I was getting database errors left and right (perhaps that means I need to do some better error catching in xBlog, eh?), so I logged into phpMyAdmin. Or I tried anyway. I wasn't able to log in, so I logged into cPanel for this domain. I went to the MySQL database section, and much to my dismay, none of my databases showed up. At this point, I officially freaked out. I immediately went to the support center for my hosting, and saw that someone else was having the same problems. There was hope, at least it wasn't just my databases. One of the server administrators came on, and was simply able to restart the SQL server to fix the problem. Thankfully, all my data was still intact.
I did, however, learn a most important lesson (one that I should have already known): back up your data. Since I didn't want to have to manually go into phpMyAdmin and back up the database for xBlog each day, I started looking at what I'd have to do to write a PHP script to do this. A bit of research showed me that the easiest way would be to use mysqldump, with a shell command.
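For reference, the shell command the function below wraps is essentially a mysqldump piped through gzip; the host, credentials, database name, and output path here are just the placeholder values used later in the example:

mysqldump --host=localhost --user=user --password=password xblogpro \
    --quick --lock-tables --add-drop-table | gzip > /root/path/to/backups/backup_0.gz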
So, I rolled up my sleeves and wrote the following PHP function to make automatically backing up MySQL databases an easy thing.
<?php
/* Function to back up a MySQL database
 * @param host         The database host name (server)
 * @param user         Database user
 * @param password     Database password
 * @param table        Database name to dump
 * @param backup_path  The path to the backup directory
 * @param backup_name  The name of the backup file - no extension
 */
function mysql_backup($host, $user, $password, $table, $backup_path, $backup_name) {
    // Day of the week (0-6) is appended so backups rotate over 7 days.
    $day = date('w');
    # gzip (.gz)
    $backup = $backup_path.$backup_name.'_'.$day.'.gz';
    exec(sprintf('mysqldump --host=%s --user=%s --password=%s %s --quick --lock-tables --add-drop-table | gzip > %s', $host, $user, $password, $table, $backup));
    # sql (.sql)
    //$backup = $backup_path.$backup_name.'_'.$day.'.sql';
    //exec(sprintf('mysqldump --host=%s --user=%s --password=%s %s --quick --lock-tables --add-drop-table > %s', $host, $user, $password, $table, $backup));
}

mysql_backup('localhost', 'user', 'password', 'xblogpro', '/root/path/to/backups/', 'backup');
// Ask the local (destination) server to pull the fresh backup by hitting the grabber script.
exec("/usr/bin/lynx http://xxx.xxx.xxx.xxx/backups/mysql_backup_grabber.php");
?>
The parameters that are passed are explained in the script, so I won't go into detail on what those are again. For more information on the various options I passed to the mysqldump command, take a look at the documentation. The exec function runs a system command that tells the Lynx browser to run the PHP script on my local server, which is where I wanted to back the files up to. I chose to use this, as I really wouldn't have felt all that safe with my backups sitting on the same server as the MySQL server. If you aren't in need of such things, simply remove the exec function to only back up the data to the regular server.
You'll see that I have two different exec commands. The one that is uncommented is the one that I'm using. It will gzip the files right away. If you'd rather just export a .sql file, simply comment out the gzip lines, and uncomment the sql lines.
I chose to append the day of the week to each backup, so at any given point, I'll have the last 7 days backed up. I know the chances of something happening while the script is in the process of backing up the data are rather slim, but I'd rather not risk it.
The contents of mysql_backup_grabber.php are very straightforward. I simply copy the file from the remote server to my local server. Here it is (it must run on the local server, obviously):
<?php
// Runs on the local server: copy today's backup file over HTTP from the remote server.
$day = date('w', time()+4000);
copy("http://www.anuragbhatia.org/backups/backup_$day.gz",
     $_SERVER['DOCUMENT_ROOT']."/backups/backup_$day.gz");
?>
Since I am located in CST, and the server is in EST, I added a little over an hour to the time on this script. You may need to adjust for the timezone difference between servers. Why a little over an hour? 'cause my local computer's clock and the remote server's clocks are probably not exactly synchronized. ;)
Finally, I set up a cron job to run the script every night at midnight. The cron looks something like this:

0 0 * * * php /root/path/to/backups/mysql_backup.php
There you have it. A quick and dirty way to automatically backup your MySQL databases with PHP. Hope it'll save a headache or two.

Edit: A small update was made to the script. The original used a header redirect to call the script that is running on my local host, however, that was not working with the cron job. I now use a system command to call the Lynx browser, which works much better.
