sftproo can be deployed using Docker, which is the recommended method for most users. This guide will walk you through the installation process.
1. Create a directory for sftproo:

```bash
mkdir /opt/sftproo
cd /opt/sftproo
```
2. Create a `docker-compose.yml` file:
```yaml
version: '3.8'

services:
  sftproo:
    image: sftproo/sftproo:latest
    container_name: sftproo
    ports:
      - "2222:2222"   # SFTP port
      - "8443:8443"   # Web UI port
    volumes:
      - ./data:/data
      - ./config:/config
      - ./backups:/backups
    environment:
      - SFTPROO_DB_HOST=mariadb
      - SFTPROO_DB_NAME=sftproo
      - SFTPROO_DB_USER=sftproo
      - SFTPROO_DB_PASSWORD=changeme
      - SFTPROO_SESSION_SECRET=changemetosomethingrandom
    depends_on:
      - mariadb
    restart: unless-stopped

  mariadb:
    image: mariadb:10.11
    container_name: sftproo-db
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassword
      - MYSQL_DATABASE=sftproo
      - MYSQL_USER=sftproo
      - MYSQL_PASSWORD=changeme
    volumes:
      - ./database:/var/lib/mysql
    restart: unless-stopped
```
3. Start the services:

```bash
docker-compose up -d
```
4. Access the web interface at https://your-server:8443
sftproo is also available as a pre-configured AMI in the AWS Marketplace. Once the instance is running, run:

```bash
sudo sftproo-setup
```

to complete configuration.

Alternatively, deploy sftproo from the Azure Marketplace and run the same command:

```bash
sudo sftproo-setup
```
After installation, access the web interface and log in with the default credentials:

- Username: `admin`
- Password: `changeme`
sftproo can be configured using environment variables:
| Variable | Description | Default |
|---|---|---|
| `SFTPROO_DB_HOST` | Database host | `localhost` |
| `SFTPROO_DB_PORT` | Database port | `3306` |
| `SFTPROO_DB_NAME` | Database name | `sftproo` |
| `SFTPROO_DB_USER` | Database user | `sftproo` |
| `SFTPROO_DB_PASSWORD` | Database password | - |
| `SFTPROO_STORAGE_PATH` | Path for user file storage | `/data` |
| `SFTPROO_BACKUP_PATH` | Path for backups | `/backups` |
| `SFTPROO_SESSION_SECRET` | Session encryption key | - |
| `SFTPROO_SFTP_PORT` | SFTP server port | `2222` |
| `SFTPROO_HTTP_PORT` | Web interface port | `8443` |
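In a deployment script you may want to resolve these variables the same way the server does. A minimal Python sketch, assuming the defaults in the table above (the `load_config` helper is illustrative, not part of sftproo):

```python
import os

# Defaults taken from the table above; None marks a required variable.
DEFAULTS = {
    "SFTPROO_DB_HOST": "localhost",
    "SFTPROO_DB_PORT": "3306",
    "SFTPROO_DB_NAME": "sftproo",
    "SFTPROO_DB_USER": "sftproo",
    "SFTPROO_DB_PASSWORD": None,
    "SFTPROO_STORAGE_PATH": "/data",
    "SFTPROO_BACKUP_PATH": "/backups",
    "SFTPROO_SESSION_SECRET": None,
    "SFTPROO_SFTP_PORT": "2222",
    "SFTPROO_HTTP_PORT": "8443",
}

def load_config(env=os.environ):
    """Resolve sftproo settings from the environment, applying defaults
    and raising if any required variable is missing."""
    config, missing = {}, []
    for name, default in DEFAULTS.items():
        value = env.get(name, default)
        if value is None:
            missing.append(name)
        config[name] = value
    if missing:
        raise ValueError(f"missing required settings: {', '.join(missing)}")
    return config
```

Passing a plain dict instead of `os.environ` makes the helper easy to test against different deployments.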
sftproo uses self-signed certificates by default. For production, you should use proper SSL certificates:
Option 1: Let's Encrypt (Recommended)
```yaml
# Using Traefik as a reverse proxy in docker-compose.yml
services:
  traefik:
    image: traefik:v2.10
    command:
      - "--certificatesresolvers.letsencrypt.acme.email=your@email.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
      - "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"

  sftproo:
    labels:
      - "traefik.http.routers.sftproo.tls.certresolver=letsencrypt"
```
Option 2: Custom Certificates
```yaml
# Mount your certificates into the container's config directory
volumes:
  - ./certs/server.crt:/config/sftproo-server.crt
  - ./certs/server.key:/config/sftproo-server.pem
```
By default, sftproo stores files locally. For cloud storage, configure S3:
```bash
SFTPROO_STORAGE_TYPE=s3
SFTPROO_S3_BUCKET=your-bucket-name
SFTPROO_S3_REGION=us-east-1
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
```
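Before switching storage types it can help to verify that every S3-related variable is set. A small illustrative check (the `check_s3_config` function is an assumption, not an sftproo utility):

```python
import os

# Variables the S3 backend needs, per the block above.
S3_REQUIRED = ["SFTPROO_S3_BUCKET", "SFTPROO_S3_REGION",
               "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"]

def check_s3_config(env=os.environ):
    """Return the list of missing variables when S3 storage is selected;
    an empty list means the configuration is complete."""
    if env.get("SFTPROO_STORAGE_TYPE") != "s3":
        return []  # local storage needs none of these
    return [name for name in S3_REQUIRED if not env.get(name)]
```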
To create a new user via the web interface:
Users can authenticate using SSH keys for enhanced security:
Adding SSH Keys:
Set storage quotas and connection limits per user:
Import multiple users via CSV:
```
# CSV format
username,email,password,role,quota_gb
john.doe,john@example.com,TempPass123!,user,10
jane.smith,jane@example.com,TempPass456!,user,20
```

Upload the CSV file in Users → Import Users.
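The import file can also be generated programmatically. A sketch using only the Python standard library (`build_import_csv` is illustrative, not an sftproo tool):

```python
import csv
import io

# Columns expected by the bulk-import format shown above.
FIELDS = ["username", "email", "password", "role", "quota_gb"]

def build_import_csv(users):
    """Render a list of user dicts as a CSV string ready for
    Users -> Import Users."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for user in users:
        writer.writerow(user)
    return buf.getvalue()
```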
Configure automatic backups in the Settings panel:
Create an immediate backup:
```bash
# Via CLI
docker exec sftproo sftproo-backup create
```

Via the web UI: Settings → Backup → Create Backup Now
Each backup includes:
To restore from a backup:
```bash
docker exec sftproo sftproo-backup restore backup-2025-01-15.tar.gz
```
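Because archive names embed the date (`backup-YYYY-MM-DD.tar.gz`), the newest backup sorts last lexicographically. An illustrative helper (not part of sftproo) for picking the archive to pass to the restore command:

```python
from pathlib import Path

def latest_backup(backup_dir):
    """Return the newest backup archive name in a directory, relying on
    the backup-YYYY-MM-DD.tar.gz naming scheme, which sorts in date
    order. Returns None when no archives are present."""
    archives = sorted(Path(backup_dir).glob("backup-*.tar.gz"))
    return archives[-1].name if archives else None
```

Note that encrypted archives (`.tar.gz.enc`) are not matched; decrypt them first as shown below.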
Backups are encrypted by default. To decrypt a backup manually:
```bash
# Use the decrypt_backup utility
./decrypt_backup -input backup-2025-01-15.tar.gz.enc -output backup.tar.gz
```
Restrict access based on IP addresses:
Enable 2FA for admin accounts:
sftproo logs all important activities:
Access logs via Settings → Logs or:
```bash
docker logs sftproo
```
Welcome to sftproo! This guide will help you connect to the SFTP server and start transferring files.
Your administrator should provide you with:
Using the command line:
```bash
sftp -P 2222 username@server.example.com
```
You'll be prompted to accept the server's host key on first connection. Type "yes" to continue.
To change your password, log in to the web interface at `https://server.example.com:8443`.
Once connected, use these commands to manage files:
| Command | Description | Example |
|---|---|---|
| `ls` | List files in current directory | `ls -la` |
| `cd` | Change directory | `cd documents` |
| `pwd` | Show current directory | `pwd` |
| `mkdir` | Create directory | `mkdir newfolder` |
| `put` | Upload file | `put localfile.txt` |
| `get` | Download file | `get remotefile.txt` |
| `rm` | Delete file | `rm oldfile.txt` |
| `rmdir` | Delete empty directory | `rmdir oldfolder` |
| `rename` | Rename file or directory | `rename old.txt new.txt` |
Transfer multiple files at once:
```bash
# Upload all .pdf files
put *.pdf

# Download all files in a directory
get -r documents/

# Upload entire directory
put -r local_folder/
```
If a transfer is interrupted, resume it:
```bash
# Resume upload
reput largefile.zip

# Resume download
reget largefile.zip
```
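For scripted transfers, OpenSSH's `sftp` can also execute commands from a batch file via the `-b` flag. A small illustrative generator (`sftp_batch` is not an sftproo tool):

```python
def sftp_batch(commands, path="transfer.batch"):
    """Write a batch file for scripted transfers with `sftp -b`.
    Each entry is one sftp command, e.g. 'put report.pdf'."""
    with open(path, "w") as f:
        f.write("\n".join(commands) + "\n")
    return path

# Run the result with:
#   sftp -P 2222 -b transfer.batch username@server.example.com
```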
Create a new SSH key pair on your local machine:
On Linux/Mac:

```bash
ssh-keygen -t ed25519 -f ~/.ssh/sftproo_key -C "your_email@example.com"
```

On Windows (PowerShell):

```powershell
ssh-keygen -t ed25519 -f $HOME\.ssh\sftproo_key -C "your_email@example.com"
```
Option 1: Via the web interface, upload your public key file (`sftproo_key.pub`).

Option 2: Ask your administrator to add it for you.
Connect using your private key:
```bash
sftp -P 2222 -i ~/.ssh/sftproo_key username@server.example.com
```
For convenience, add your key to the SSH agent:
```bash
# Start the SSH agent
eval $(ssh-agent)

# Add your key
ssh-add ~/.ssh/sftproo_key

# Now connect without specifying the key
sftp -P 2222 username@server.example.com
```
While command-line SFTP is powerful, graphical clients can be more user-friendly. Most clients accept `sftp://server.example.com` as the host, or a full URL that includes the username and port, such as `sftp://username@server:2222`.
If you get "Connection refused" errors:
If authentication fails:
If you can't access certain directories, check their permissions with `ls -la`.

If large transfers fail partway through, retry the upload (e.g. `put -z file.txt`, if your client supports it) or resume with `reput` or `reget`.
If you continue to experience issues:
sftproo provides a RESTful API for programmatic access to user management and system configuration.
The API base URL is `https://server.example.com:8443/api/v1`.
All API requests require authentication using an API key:
```bash
curl -H "Authorization: Bearer YOUR_API_KEY" \
  https://server.example.com:8443/api/v1/users
```
All responses are in JSON format:
```json
{
  "success": true,
  "data": {
    // Response data
  },
  "message": "Operation successful"
}
```
Errors return appropriate HTTP status codes:
```json
{
  "success": false,
  "error": {
    "code": "USER_NOT_FOUND",
    "message": "The requested user does not exist"
  }
}
```
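Client code typically unwraps this envelope before using the payload. An illustrative Python helper (not part of any official SDK):

```python
class ApiError(Exception):
    """Raised when an sftproo API response reports failure."""
    def __init__(self, code, message):
        super().__init__(f"{code}: {message}")
        self.code = code

def unwrap(payload):
    """Return the `data` object from a successful response envelope,
    or raise ApiError with the code/message from a failed one."""
    if payload.get("success"):
        return payload.get("data")
    err = payload.get("error", {})
    raise ApiError(err.get("code", "UNKNOWN"), err.get("message", ""))
```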
Include the API key in the Authorization header:
```bash
# Using curl
curl -H "Authorization: Bearer YOUR_API_KEY" \
  https://server.example.com:8443/api/v1/users
```

```python
# Using Python requests
import requests

headers = {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
}

response = requests.get(
    'https://server.example.com:8443/api/v1/users',
    headers=headers
)
```
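For repeated calls, a `requests.Session` can carry the auth headers once instead of per request. An illustrative convenience wrapper (`make_client` is an assumption, not an official sftproo client):

```python
import requests

def make_client(api_key):
    """Return a requests.Session pre-configured with the bearer token,
    so individual calls no longer need explicit headers."""
    session = requests.Session()
    session.headers.update({
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    })
    return session

# Usage against a live server:
#   client = make_client("YOUR_API_KEY")
#   resp = client.get("https://server.example.com:8443/api/v1/users")
```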
`GET /api/v1/users`

Response:

```json
{
  "success": true,
  "data": {
    "users": [
      {
        "id": 1,
        "username": "john.doe",
        "email": "john@example.com",
        "role": "user",
        "created_at": "2025-01-15T10:00:00Z"
      }
    ]
  }
}
```
`POST /api/v1/users`

Request Body:

```json
{
  "username": "jane.doe",
  "email": "jane@example.com",
  "password": "SecurePass123!",
  "role": "user"
}
```

Response:

```json
{
  "success": true,
  "data": {
    "user": {
      "id": 2,
      "username": "jane.doe",
      "email": "jane@example.com"
    }
  }
}
```
`PUT /api/v1/users/{id}`

Request Body:

```json
{
  "email": "newemail@example.com",
  "quota_gb": 50
}
```
`DELETE /api/v1/users/{id}`

`GET /api/v1/users/{id}/ssh-keys`

`POST /api/v1/users/{id}/ssh-keys`

Request Body:

```json
{
  "name": "Work Laptop",
  "public_key": "ssh-ed25519 AAAAC3NzaC1... user@example.com"
}
```
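Before posting a key, a client may want to sanity-check the `public_key` value. A loose, illustrative validator (the accepted key types are an assumption, not sftproo's actual list):

```python
# Common OpenSSH public key types; adjust to your server's policy.
ALLOWED_TYPES = {"ssh-ed25519", "ssh-rsa", "ecdsa-sha2-nistp256"}

def valid_public_key(key):
    """Loose sanity check on an OpenSSH public key line:
    'type base64-blob [comment]'."""
    parts = key.split()
    return len(parts) in (2, 3) and parts[0] in ALLOWED_TYPES
```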
`GET /api/v1/system/status`

Response:

```json
{
  "success": true,
  "data": {
    "version": "1.0.19",
    "uptime": 86400,
    "users_online": 5,
    "disk_usage": {
      "used_gb": 45.2,
      "total_gb": 100
    }
  }
}
```
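The `disk_usage` object lends itself to simple monitoring. An illustrative check on the status payload (`disk_usage_pct` is not an sftproo function):

```python
def disk_usage_pct(status_data):
    """Percentage of storage used, computed from the disk_usage
    object in the /system/status response data."""
    du = status_data["disk_usage"]
    return 100.0 * du["used_gb"] / du["total_gb"]
```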
`POST /api/v1/system/backup`

Request Body:

```json
{
  "include_user_files": true,
  "compress": true
}
```
`GET /api/v1/settings`

`PUT /api/v1/settings`

Request Body:

```json
{
  "backup_enabled": true,
  "backup_schedule": "0 2 * * *",
  "ip_whitelist_enabled": true
}
```
Interactive API documentation is available at `/api/v1/docs`.