Compare commits

..

1 commit

Author: El RIDO
SHA1: dc9983ec3e
Message: making 1.2.1 image get all the improvements from master image
Date: 2019-12-21 16:58:33 +01:00
32 changed files with 117 additions and 786 deletions

@@ -1,10 +1,8 @@
# Docs
README*.md
README.md
# Git
.git/
.github/
buildx.sh
# OSX
.DS_Store

@@ -1,12 +0,0 @@
version: 2
updates:
- package-ecosystem: "docker"
directory: "/"
schedule:
interval: "daily"
# Maintain dependencies for GitHub Actions
# src: https://github.com/marketplace/actions/build-and-push-docker-images#keep-up-to-date-with-github-dependabot
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"

.github/rules.tsv (23 changed lines)
@@ -1,23 +0,0 @@
# connect-src wildcard is required for the API to work when called from external instances
10055 IGNORE (CSP: Wildcard Directive)
# the image is intended for being used behind a reverse-proxy, so TLS termination is already done
10106 IGNORE (HTTP Only Site)
# the code is open-source, no special information here
10027 IGNORE (Information Disclosure - Suspicious Comments)
40034 IGNORE (.env Information Leak)
# it doesn't seem to like that we configured our nginx to not respond to directory paths
10104 IGNORE (User Agent Fuzzer)
# the supposed timestamps are actually rgba values in hex notation or the fractional part of percentages in CSS files
10096 IGNORE (Timestamp Disclosure - Unix)
# we have no authentication so CSRF is not possible, the detected password form is only used interactively
10202 IGNORE (Absence of Anti-CSRF Tokens)
20012 IGNORE (Anti-CSRF Tokens Check)
# glad we are considered modern
10109 IGNORE (Modern Web Application)
#
#
# false-positives
#
# again we return 200 to some strange URL
90034 IGNORE (Cloud Metadata Potentially Exposed)
40035 IGNORE (Hidden File Found)

@@ -1,46 +0,0 @@
name: Build & Deploy container image
on:
schedule:
- cron: '0 0 * * *' # everyday at midnight UTC
pull_request:
branches: master
push:
branches: master
tags: '*'
jobs:
buildx:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
base-image: [stable, edge]
destination-image: [nginx-fpm-alpine, fs, gcs, pdo, s3]
name: ${{ matrix.destination-image }} image / ${{ matrix.base-image }} release
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:
platforms: linux/arm/v6,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
install: true
- name: Login to DockerHub
uses: docker/login-action@v3
if: ${{ github.event_name != 'pull_request' && (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/')) }}
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
if: ${{ github.event_name != 'pull_request' && (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/')) }}
with:
registry: ghcr.io
username: privatebin
password: ${{ github.token }}
- name: Docker Build
run: ./buildx.sh ${{ github.event_name }} ${{ matrix.destination-image }} ${{ matrix.base-image }}

@@ -1,53 +0,0 @@
# This is a basic workflow to help you get started with Actions
name: Security-scan
# Controls when the action will run.
on:
schedule:
- cron: '0 3 * * *' # everyday at 03:00 UTC
pull_request:
branches: master
push:
branches: master
tags: '*'
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build"
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
- name: Checkout
uses: actions/checkout@v4
with:
ref: master
# Runs a single command using the runners shell
- name: Pull and start docker
run: docker run -d --read-only -p 8080:8080 privatebin/nginx-fpm-alpine
# Run OWASP scan
- name: OWASP ZAP Full Scan
uses: zaproxy/action-full-scan@v0.12.0
with:
# GitHub Token to create issues in the repository
#token: # optional, default is ${{ github.token }}
# Target URL
target: http://localhost:8080
# Relative path of the ZAP configuration file
rules_file_name: ".github/rules.tsv" # optional
# The Docker file to be executed
#docker_name: # default is owasp/zap2docker-stable
# Additional command line options
#cmd_options: # optional
# The title for the GitHub issue to be created
#issue_title: # optional, default is ZAP Full Scan Report
# The action status will be set to fail if ZAP identifies any alerts during the full scan
#fail_action: # optional

@@ -1,20 +0,0 @@
on:
push:
branches:
- master
pull_request:
branches: master
name: "Shellcheck"
permissions: {}
jobs:
shellcheck:
name: Shellcheck
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Run ShellCheck
uses: ludeeus/action-shellcheck@master

@@ -1,40 +0,0 @@
# A sample workflow which checks out the code, builds a container
# image using Docker and scans that image for vulnerabilities using
# Snyk. The results are then uploaded to GitHub Security Code Scanning
#
# For more examples, including how to limit scans to only high-severity
# issues, monitor images for newly disclosed vulnerabilities in Snyk and
# fail PR checks for new vulnerabilities, see https://github.com/snyk/actions/
name: Snyk Container
on:
push:
branches: [ master ]
schedule:
- cron: '23 7 * * 5'
jobs:
snyk:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build a Docker image
run: docker build -t privatebin/nginx-fpm-alpine .
- name: Run Snyk to check Docker image for vulnerabilities
# Snyk can be used to break the build when it detects vulnerabilities.
# In this case we want to upload the issues to GitHub Code Scanning
continue-on-error: true
uses: snyk/actions/docker@master
env:
# In order to use the Snyk Action you will need to have a Snyk API token.
# More details in https://github.com/snyk/actions#getting-your-snyk-token
# or you can signup for free at https://snyk.io/login
SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
with:
image: privatebin/nginx-fpm-alpine
args: --file=Dockerfile
- name: Upload result to GitHub Code Scanning
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: snyk.sarif

@@ -1,35 +0,0 @@
name: trivy-analysis
on:
push:
branches: [ master ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ master ]
schedule:
- cron: '20 13 * * 3'
jobs:
build:
name: Trivy analysis
runs-on: "ubuntu-20.04"
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Build an image from Dockerfile
run: |
docker build -t privatebin/nginx-fpm-alpine:${{ github.sha }} .
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: 'privatebin/nginx-fpm-alpine:${{ github.sha }}'
format: 'template'
template: '@/contrib/sarif.tpl'
output: 'trivy-results.sarif'
severity: 'CRITICAL,HIGH'
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: 'trivy-results.sarif'

.gitignore (1 changed line)
@@ -1 +0,0 @@
Dockerfile.edge

@@ -1,104 +1,69 @@
FROM alpine:3.21
FROM alpine:3.11
ARG ALPINE_PACKAGES="php84-iconv php84-pdo_mysql php84-pdo_pgsql php84-openssl php84-simplexml"
ARG COMPOSER_PACKAGES="aws/aws-sdk-php google/cloud-storage"
ARG PBURL=https://github.com/PrivateBin/PrivateBin/
ARG RELEASE=1.7.6
ARG UID=65534
ARG GID=82
MAINTAINER PrivateBin <support@privatebin.org>
ENV CONFIG_PATH=/srv/cfg
ENV PATH=$PATH:/srv/bin
LABEL org.opencontainers.image.authors=support@privatebin.org \
org.opencontainers.image.vendor=PrivateBin \
org.opencontainers.image.documentation=https://github.com/PrivateBin/docker-nginx-fpm-alpine/blob/master/README.md \
org.opencontainers.image.source=https://github.com/PrivateBin/docker-nginx-fpm-alpine \
org.opencontainers.image.licenses=zlib-acknowledgement \
org.opencontainers.image.version=${RELEASE}
COPY release.asc /tmp/
ENV RELEASE 1.2.1
ENV PBURL https://github.com/PrivateBin/PrivateBin/
ENV S6RELEASE v1.22.1.0
ENV S6URL https://github.com/just-containers/s6-overlay/releases/download/
ENV S6_READ_ONLY_ROOT 1
RUN \
# Prepare composer dependencies
ALPINE_PACKAGES="$(echo ${ALPINE_PACKAGES} | sed 's/,/ /g')" ;\
ALPINE_COMPOSER_PACKAGES="" ;\
if [ -n "${COMPOSER_PACKAGES}" ] ; then \
# we need these PHP 8.3 packages until composer gets updated to depend on PHP 8.4
ALPINE_COMPOSER_PACKAGES="composer" ;\
if [ -n "${ALPINE_PACKAGES##*php83-curl*}" ] ; then \
ALPINE_COMPOSER_PACKAGES="php83-curl ${ALPINE_COMPOSER_PACKAGES}" ;\
fi ;\
if [ -n "${ALPINE_PACKAGES##*php83-mbstring*}" ] ; then \
ALPINE_COMPOSER_PACKAGES="php83-mbstring ${ALPINE_COMPOSER_PACKAGES}" ;\
fi ;\
if [ -z "${ALPINE_PACKAGES##*php84-simplexml*}" ] ; then \
ALPINE_COMPOSER_PACKAGES="php83-simplexml ${ALPINE_COMPOSER_PACKAGES}" ;\
fi ;\
fi \
# Install dependencies
&& apk upgrade --no-cache \
&& apk add --no-cache gnupg git nginx php84 php84-ctype php84-fpm php84-gd \
php84-opcache s6 tzdata ${ALPINE_PACKAGES} ${ALPINE_COMPOSER_PACKAGES} \
# Stabilize php config location
&& mv /etc/php84 /etc/php \
&& ln -s /etc/php /etc/php84 \
&& ln -s $(which php84) /usr/local/bin/php \
# Remove (some of the) default nginx & php config
&& rm -f /etc/nginx.conf /etc/nginx/http.d/default.conf /etc/php/php-fpm.d/www.conf \
apk add --no-cache gnupg libcap nginx php7-fpm php7-json php7-gd \
php7-opcache php7-pdo_mysql php7-pdo_pgsql tzdata \
# Remove (some of the) default nginx config
&& rm -f /etc/nginx.conf /etc/nginx/conf.d/default.conf /etc/php7/php-fpm.d/www.conf \
&& rm -rf /etc/nginx/sites-* \
# Ensure nginx logs, even if the config has errors, are written to stderr
&& ln -s /dev/stderr /var/log/nginx/error.log \
# Install PrivateBin
&& cd /tmp \
&& export GNUPGHOME="$(mktemp -d -p /tmp)" \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg2 --list-public-keys || /bin/true \
&& gpg2 --import /tmp/release.asc \
&& wget -qO - https://privatebin.info/key/release.asc | gpg2 --import - \
&& rm -rf /var/www/* \
&& if expr "${RELEASE}" : '[0-9]\{1,\}\.[0-9]\{1,\}\.[0-9]\{1,\}$' >/dev/null ; then \
echo "getting release ${RELEASE}"; \
wget -qO ${RELEASE}.tar.gz.asc ${PBURL}releases/download/${RELEASE}/PrivateBin-${RELEASE}.tar.gz.asc \
&& wget -q ${PBURL}archive/${RELEASE}.tar.gz \
&& gpg2 --verify ${RELEASE}.tar.gz.asc ; \
else \
echo "getting tarball for ${RELEASE}"; \
git clone ${PBURL%%/}.git -b ${RELEASE}; \
(cd $(basename ${PBURL}) && git archive --prefix ${RELEASE}/ --format tgz ${RELEASE} > /tmp/${RELEASE}.tar.gz); \
fi \
&& cd /tmp \
&& wget -qO ${RELEASE}.tar.gz.asc ${PBURL}releases/download/${RELEASE}/PrivateBin-${RELEASE}.tar.gz.asc \
&& wget -q ${PBURL}archive/${RELEASE}.tar.gz \
&& gpg2 --verify ${RELEASE}.tar.gz.asc \
&& cd /var/www \
&& tar -xzf /tmp/${RELEASE}.tar.gz --strip 1 \
&& if [ -n "${COMPOSER_PACKAGES}" ] ; then \
composer remove --dev --no-update phpunit/phpunit \
&& composer config --unset platform \
&& composer require --no-update ${COMPOSER_PACKAGES} \
&& composer update --no-dev --optimize-autoloader \
rm /usr/local/bin/* ;\
fi \
&& rm *.md cfg/conf.sample.php \
&& mv bin cfg lib tpl vendor /srv \
&& mv cfg lib tpl vendor /srv \
&& mkdir -p /srv/data \
&& sed -i "s#define('PATH', '');#define('PATH', '/srv/');#" index.php \
# Install s6 overlay for service management
&& wget -qO - https://keybase.io/justcontainers/key.asc | gpg2 --import - \
&& cd /tmp \
&& S6ARCH=$(uname -m) \
&& case ${S6ARCH} in \
x86_64) S6ARCH=amd64;; \
armv7l) S6ARCH=armhf;; \
esac \
&& wget -q ${S6URL}${S6RELEASE}/s6-overlay-${S6ARCH}.tar.gz.sig \
&& wget -q ${S6URL}${S6RELEASE}/s6-overlay-${S6ARCH}.tar.gz \
&& gpg2 --verify s6-overlay-${S6ARCH}.tar.gz.sig \
&& tar -xzf s6-overlay-${S6ARCH}.tar.gz -C / \
# Support running s6 under a non-root user
&& mkdir -p /etc/s6/services/nginx/supervise /etc/s6/services/php-fpm84/supervise \
&& mkdir -p /etc/services.d/nginx/supervise /etc/services.d/php-fpm7/supervise \
&& mkfifo \
/etc/s6/services/nginx/supervise/control \
/etc/s6/services/php-fpm84/supervise/control \
&& chown -R ${UID}:${GID} /etc/s6 /run /srv/* /var/lib/nginx /var/www \
&& chmod o+rwx /run /var/lib/nginx /var/lib/nginx/tmp \
/etc/services.d/nginx/supervise/control \
/etc/services.d/php-fpm7/supervise/control \
/etc/s6/services/s6-fdholderd/supervise/control \
&& setcap 'cap_net_bind_service=+ep' /usr/sbin/nginx \
&& chown -R nobody.www-data /etc/services.d /etc/s6 /run /srv/* /var/lib/nginx /var/www \
# Clean up
&& gpgconf --kill gpg-agent \
&& rm -rf /tmp/* composer.* \
&& apk del --no-cache gnupg git ${ALPINE_COMPOSER_PACKAGES}
&& rm -rf "${GNUPGHOME}" /tmp/* \
&& apk del gnupg libcap
COPY etc/ /etc/
WORKDIR /var/www
# user nobody, group www-data
USER ${UID}:${GID}
USER nobody:www-data
# mark dirs as volumes that need to be writable, allows running the container --read-only
VOLUME /run /srv/data /srv/img /tmp /var/lib/nginx/tmp
VOLUME /run /srv/data /tmp /var/lib/nginx/tmp
EXPOSE 8080
EXPOSE 80 8080
ENTRYPOINT ["/etc/init.d/rc.local"]
ENTRYPOINT ["/init"]

@@ -1,7 +0,0 @@
# PrivateBin on Nginx, php-fpm & Alpine with file based storage backend
**PrivateBin** is a minimalist, open source online [pastebin](https://en.wikipedia.org/wiki/Pastebin) where the server has zero knowledge of pasted data. Data is encrypted and decrypted in the browser using 256bit AES in [Galois Counter mode](https://en.wikipedia.org/wiki/Galois/Counter_Mode).
## Image variants
This is an image optimized for the file based storage backend. Please see the [generic image](https://hub.docker.com/r/privatebin/nginx-fpm-alpine) for details on how to use this image, other images variants and the different tags.

@@ -1,7 +0,0 @@
# PrivateBin on Nginx, php-fpm & Alpine with Google Cloud Storage backend
**PrivateBin** is a minimalist, open source online [pastebin](https://en.wikipedia.org/wiki/Pastebin) where the server has zero knowledge of pasted data. Data is encrypted and decrypted in the browser using 256bit AES in [Galois Counter mode](https://en.wikipedia.org/wiki/Galois/Counter_Mode).
## Image variants
This is an image optimized for the Google Cloud Storage backend. Please see the [generic image](https://hub.docker.com/r/privatebin/nginx-fpm-alpine) for details on how to use this image, other images variants and the different tags.

README.md (218 changed lines)
@@ -1,103 +1,44 @@
# PrivateBin on Nginx, php-fpm & Alpine
# PrivateBin on nginx, php-fpm & alpine
**PrivateBin** is a minimalist, open source online [pastebin](https://en.wikipedia.org/wiki/Pastebin) where the server has zero knowledge of pasted data. Data is encrypted and decrypted in the browser using 256bit AES in [Galois Counter mode](https://en.wikipedia.org/wiki/Galois/Counter_Mode).
This repository contains the Dockerfile and resources needed to create a docker image with a pre-installed PrivateBin instance in a secure default configuration. The images are based on the docker hub Alpine image, extended with the GD module required to generate discussion avatars and the Nginx webserver to serve static JavaScript libraries, CSS & the logos. All logs of php-fpm and Nginx (access & errors) are forwarded to docker logs.
## Image variants
This is the all-in-one image ([Docker Hub](https://hub.docker.com/r/privatebin/nginx-fpm-alpine/) / [GitHub](https://github.com/orgs/PrivateBin/packages/container/package/nginx-fpm-alpine)) that can be used with any storage backend supported by PrivateBin - file based storage, databases, Google Cloud or S3 Storage. We also offer dedicated images for each backend:
- Image for file based storage ([Docker Hub](https://hub.docker.com/r/privatebin/fs) / [GitHub](https://github.com/orgs/PrivateBin/packages/container/package/fs))
- Image for PostgreSQL, MariaDB & MySQL ([Docker Hub](https://hub.docker.com/r/privatebin/pdo) / [GitHub](https://github.com/orgs/PrivateBin/packages/container/package/pdo))
- Image for Google Cloud Storage ([Docker Hub](https://hub.docker.com/r/privatebin/gcs) / [GitHub](https://github.com/orgs/PrivateBin/packages/container/package/gcs))
- Image for S3 Storage ([Docker Hub](https://hub.docker.com/r/privatebin/s3) / [GitHub](https://github.com/orgs/PrivateBin/packages/container/package/s3))
## Image tags
All images contain a release version of PrivateBin and are offered with the following tags:
- `latest` is an alias of the latest pushed image, usually the same as `nightly`, but excluding `edge`
- `nightly` is the latest released PrivateBin version on an upgraded Alpine release image, including the latest changes from the docker image repository
- `edge` is the latest released PrivateBin version on an upgraded Alpine edge image
- `stable` contains the latest PrivateBin release on the latest tagged release of the [docker image git repository](https://github.com/PrivateBin/docker-nginx-fpm-alpine) - gets updated when important security fixes are released for Alpine or upon new Alpine releases
- `1.5.1` contains PrivateBin version 1.5.1 on the latest tagged release of the [docker image git repository](https://github.com/PrivateBin/docker-nginx-fpm-alpine) - gets updated when important security fixes are released for Alpine or upon new Alpine releases, same as stable
- `1.5.1-...` are provided for selecting specific, immutable images
If you update your images automatically via pulls, the `stable`, `nightly` or `latest` tags are recommended. If you prefer to have control and reproducibility or use a form of orchestration, the numeric tags are probably preferable. The `edge` tag offers a preview of software in future Alpine releases and serves as an early warning system to detect image build issues in them.
## Image registries
These images are hosted on the Docker Hub and the GitHub container registries:
- [Images on Docker Hub](https://hub.docker.com/u/privatebin), which are prefixed `privatebin` or `docker.io/privatebin`
- [Images on GitHub](https://github.com/orgs/PrivateBin/packages), which are prefixed `ghcr.io/privatebin`
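For example, the same image can be pulled from either registry; the tag below is the immutable example mentioned above:
```console
$ docker pull docker.io/privatebin/nginx-fpm-alpine:1.5.1
$ docker pull ghcr.io/privatebin/nginx-fpm-alpine:1.5.1
```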
This repository contains the Dockerfile and resources needed to create a docker image with a pre-installed PrivateBin instance in a secure default configuration. The images are based on the docker hub alpine image, extended with the GD module required to generate discussion avatars and the Nginx webserver to serve static JavaScript libraries, CSS & the logos. All logs of php-fpm and Nginx (access & errors) are forwarded to docker logs.
## Running the image
Assuming you have docker successfully installed and internet access, you can fetch and run the image from the docker hub like this:
```console
$ docker run -d --restart="always" --read-only -p 8080:8080 -v $PWD/privatebin-data:/srv/data privatebin/nginx-fpm-alpine
```bash
docker run -d --restart="always" --read-only -p 8080:8080 -v privatebin-data:/srv/data privatebin/nginx-fpm-alpine
```
The parameters in detail:
- `-v $PWD/privatebin-data:/srv/data` - replace `$PWD/privatebin-data` with the path to the folder on your system where the pastes and other service data should be persisted. This guarantees that your pastes aren't lost after you stop and restart the image or when you replace it. May be skipped if you just want to test the image or use a database or the Google Cloud Storage backend.
- `-v privatebin-data:/srv/data` - replace `privatebin-data` with the path to the folder on your system where the pastes and other service data should be persisted. This guarantees that your pastes aren't lost after you stop and restart the image or when you replace it. May be skipped if you just want to test the image.
- `-p 8080:8080` - The Nginx webserver inside the container listens on port 8080; this parameter exposes it on your system on port 8080. Be sure to use a reverse proxy for HTTPS termination in front of it in production environments.
- `--read-only` - This image supports running in read-only mode. Using this reduces the attack surface slightly, since an exploit in one of the image's services can't overwrite arbitrary files in the container. Only /tmp, /var/tmp, /var/run & /srv/data may be written into.
- `-d` - launches the container in the background. You can use `docker ps` and `docker logs` to check if the container is alive and well.
- `--restart="always"` - restart the container if it crashes, mainly useful for production setups
> Note that the volume mounted must be owned by UID 65534 / GID 82. If you run the container in a docker instance with "userns-remap" you need to add your subuid/subgid range to these numbers.
>
> Note, too, that this image exposes the same service on port 80, for backwards compatibility with older versions of the image. To use port 80 with the current image, you either need to have a filesystem with extended attribute support so the nginx binary can be granted the capability to bind to ports below 1024 as non-root user or you need to start the image with user id 0 (root) using the parameter `-u 0`.
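A minimal sketch of the two notes above (paths and the port mapping are assumptions, adjust them to your setup):
```console
# make the mounted data directory writable for the unprivileged container user (UID 65534 / GID 82)
$ sudo chown -R 65534:82 $PWD/privatebin-data
# to use the legacy port 80 without extended filesystem attribute support, start the container as root instead
$ docker run -d --restart="always" --read-only -p 80:80 -u 0 -v $PWD/privatebin-data:/srv/data privatebin/nginx-fpm-alpine
```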
### Custom configuration
In case you want to use a customized [conf.php](https://github.com/PrivateBin/PrivateBin/blob/master/cfg/conf.sample.php) file, for example one that has file uploads enabled or that uses a different template, add the file as a second volume:
```console
$ docker run -d --restart="always" --read-only -p 8080:8080 -v $PWD/conf.php:/srv/cfg/conf.php:ro -v $PWD/privatebin-data:/srv/data privatebin/nginx-fpm-alpine
```bash
docker run -d --restart="always" --read-only -p 8080:8080 -v conf.php:/srv/cfg/conf.php:ro -v privatebin-data:/srv/data privatebin/nginx-fpm-alpine
```
Note: The `Filesystem` data storage is supported out of the box. The image includes PDO modules for MySQL and PostgreSQL, required for the `Database` one, but you still need to keep the /srv/data persisted for the server salt and the traffic limiter when using a release before 1.4.0.
Note: The `Filesystem` data storage is supported out of the box. The image includes PDO modules for MySQL, PostgreSQL and SQLite, required for the `Database` one, but you still need to keep the /srv/data persisted for the server salt and the traffic limiter.
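A minimal sketch of a conf.php switching to the `Database` backend, assuming a MariaDB/MySQL server reachable as `mariadb`; all DSN values and credentials below are placeholders:
```console
$ cat conf.php
[model]
class = Database
[model_options]
dsn = "mysql:host=mariadb;dbname=privatebin;charset=UTF8"
tbl = "privatebin_"
usr = "privatebin"
pwd = "secret"
$ docker run -d --restart="always" --read-only -p 8080:8080 \
    -v $PWD/conf.php:/srv/cfg/conf.php:ro \
    -v $PWD/privatebin-data:/srv/data privatebin/nginx-fpm-alpine
```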
#### Environment variables
### Adjusting nginx or php-fpm settings
The following variables do get passed down to the PHP application to support various scenarios. This allows changing some settings via the environment instead of a configuration file. Most of these relate to the storage backends:
You can attach your own `php.ini` or nginx configuration files to the folders `/etc/php7/conf.d/` and `/etc/nginx/conf.d/` respectively. This would for example let you adjust the maximum size these two services accept for file uploads, if you need more than the default 10 MiB.
##### Amazon Web Services variables used by the S3 backend
- `AWS_ACCESS_KEY_ID`
- `AWS_CONTAINER_AUTHORIZATION_TOKEN`
- `AWS_CONTAINER_CREDENTIALS_FULL_URI`
- `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI`
- `AWS_DEFAULT_REGION`
- `AWS_PROFILE`
- `AWS_ROLE_ARN`
- `AWS_ROLE_SESSION_NAME`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_SESSION_TOKEN`
- `AWS_STS_REGIONAL_ENDPOINTS`
- `AWS_WEB_IDENTITY_TOKEN_FILE`
- `AWS_SHARED_CREDENTIALS_FILE`
##### Google Cloud variables used by the GCS backend
- `GCLOUD_PROJECT`
- `GOOGLE_APPLICATION_CREDENTIALS`
- `GOOGLE_CLOUD_PROJECT`
- `PRIVATEBIN_GCS_BUCKET`
##### Custom backend settings
The following variables are not used by default, but can be [enabled in your custom configuration file](https://github.com/PrivateBin/docker-nginx-fpm-alpine/issues/196#issuecomment-2163331528), to keep sensitive information out of it:
- `STORAGE_HOST`
- `STORAGE_LOGIN`
- `STORAGE_PASSWORD`
- `STORAGE_CONTAINER`
##### Configuration folder
- `CONFIG_PATH`
##### Timezone settings
### Timezone settings
The image supports the use of the following two environment variables to adjust the timezone. This is most useful to ensure the logs show the correct local time.
@@ -106,144 +47,21 @@ The image supports the use of the following two environment variables to adjust
Note: The application internally handles expiration of pastes based on a UNIX timestamp that is calculated based on the timezone set during its creation. Changing PHP_TZ will affect this and lead to earlier (if the timezone is increased) or later (if it is decreased) expiration than expected.
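For example, to run the container with both variables set to the same zone (the zone name is just an example):
```console
$ docker run -d --restart="always" --read-only -p 8080:8080 \
    -e TZ=Europe/Zurich -e PHP_TZ=Europe/Zurich \
    -v $PWD/privatebin-data:/srv/data privatebin/nginx-fpm-alpine
```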
### Adjusting nginx or php-fpm settings
You can attach your own `php.ini` or nginx configuration files to the folders `/etc/php/conf.d/` and `/etc/nginx/http.d/` respectively. This would for example let you adjust the maximum size these two services accept for file uploads, if you need more than the default 10 MiB.
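A minimal sketch of such an override, assuming a local `upload.ini` raising the PHP upload limits (file name and values are placeholders):
```console
$ cat upload.ini
upload_max_filesize = 50M
post_max_size = 50M
$ docker run -d --restart="always" --read-only -p 8080:8080 \
    -v $PWD/upload.ini:/etc/php/conf.d/99-upload.ini:ro \
    -v $PWD/privatebin-data:/srv/data privatebin/nginx-fpm-alpine
```
An analogous nginx snippet setting `client_max_body_size` would go into `/etc/nginx/http.d/` in the same way.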
### Kubernetes deployment
Below is an example deployment for Kubernetes.
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: privatebin-deployment
labels:
app: privatebin
spec:
replicas: 3
selector:
matchLabels:
app: privatebin
template:
metadata:
labels:
app: privatebin
spec:
securityContext:
runAsUser: 65534
runAsGroup: 82
fsGroup: 82
containers:
- name: privatebin
image: privatebin/nginx-fpm-alpine:stable
ports:
- containerPort: 8080
env:
- name: TZ
value: Antarctica/South_Pole
- name: PHP_TZ
value: Antarctica/South_Pole
securityContext:
readOnlyRootFilesystem: true
privileged: false
allowPrivilegeEscalation: false
livenessProbe:
httpGet:
path: /
port: 8080
readinessProbe:
httpGet:
path: /
port: 8080
volumeMounts:
- mountPath: /srv/data
name: privatebin-data
readOnly: False
- mountPath: /run
name: run
readOnly: False
- mountPath: /tmp
name: tmp
readOnly: False
- mountPath: /var/lib/nginx/tmp
name: nginx-cache
readOnly: False
volumes:
- name: run
emptyDir:
medium: "Memory"
- name: tmp
emptyDir:
medium: "Memory"
- name: nginx-cache
emptyDir: {}
```
Note that the volume `privatebin-data` has to be a shared, persisted volume across all nodes, i.e. on an NFS share. As of PrivateBin 1.4.0 it is no longer required when using a database or Google Cloud Storage.
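A sketch of such a shared claim, assuming a storage class that supports `ReadWriteMany` (for example one backed by NFS); the deployment's `volumes:` list would then reference it by name:
```console
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: privatebin-data
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
EOF
```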
## Running administrative scripts
The image includes two administrative scripts, which you can use to migrate from one storage backend to another, delete pastes by ID, remove empty directories when using the Filesystem backend, purge all expired pastes and display statistics. These can be executed within the running image or run as alternative entrypoints with the same volumes attached as in the running service image; the former option is recommended.
```console
# assuming you named your container "privatebin" using the option: --name privatebin
$ docker exec -t privatebin administration --help
Usage:
administration [--delete <paste id> | --empty-dirs | --help | --purge | --statistics]
Options:
-d, --delete deletes the requested paste ID
-e, --empty-dirs removes empty directories (only if Filesystem storage is
configured)
-h, --help displays this help message
-p, --purge purge all expired pastes
-s, --statistics reads all stored pastes and comments and reports statistics
$ docker exec -t privatebin migrate --help
migrate - Copy data between PrivateBin backends
Usage:
migrate [--delete-after] [--delete-during] [-f] [-n] [-v] srcconfdir
[<dstconfdir>]
migrate [-h|--help]
Options:
--delete-after delete data from source after all pastes and comments have
successfully been copied to the destination
--delete-during delete data from source after the current paste and its
comments have successfully been copied to the destination
-f forcefully overwrite data which already exists at the
destination
-h, --help displays this help message
-n dry run, do not copy data
-v be verbose
<srcconfdir> use storage backend configuration from conf.php found in
this directory as source
<dstconfdir> optionally, use storage backend configuration from conf.php
found in this directory as destination; defaults to:
/srv/bin/../cfg/conf.php
```
Note that in order to migrate between different storage backends you will need to use the all-in-one image called `privatebin/nginx-fpm-alpine`, as it comes with all the drivers and libraries for the different supported backends. When using the variant images, you will only be able to migrate between two backends of the same storage type, for example two filesystem paths or two database backends.
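As a sketch (the container name and the mount point of the source configuration are assumptions), a verbose migration into the currently configured backend could look like this:
```console
# assumes a directory containing a source conf.php was mounted into the running container at /srv/cfg-old
$ docker exec -t privatebin migrate -v /srv/cfg-old
```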
## Rolling your own image
To reproduce the image, run:
```console
$ docker build -t privatebin/nginx-fpm-alpine .
```bash
docker build -t privatebin/nginx-fpm-alpine .
```
### Behind the scenes
The two processes, Nginx and php-fpm, are started by s6.
The two processes, Nginx and php-fpm, are started by s6 overlay.
Nginx is required to serve static files and caches them, too. Requests to the index.php (which is the only PHP file exposed in the document root at /var/www) are passed to php-fpm via a socket at /run/php-fpm.sock. All other PHP files and the data are stored under /srv.
The Nginx setup supports only HTTP, so make sure that you run a reverse proxy in front of this for HTTPS offloading and reducing the attack surface on your TLS stack. The Nginx in this image is set up to deflate/gzip text content.
During the build of the image, the PrivateBin release archive is downloaded from GitHub. All the downloaded Alpine packages and the PrivateBin archive are validated using cryptographic signatures to ensure they have not been tampered with before they are deployed in the image.
During the build of the image, the PrivateBin release archive and the s6 overlay binaries are downloaded from GitHub. All the downloaded Alpine packages, s6 overlay binaries and the PrivateBin archive are validated using cryptographic signatures to ensure they have not been tampered with before they are deployed in the image.

@@ -1,7 +0,0 @@
# PrivateBin on Nginx, php-fpm & Alpine with PostgreSQL, MariaDB & MySQL backend
**PrivateBin** is a minimalist, open source online [pastebin](https://en.wikipedia.org/wiki/Pastebin) where the server has zero knowledge of pasted data. Data is encrypted and decrypted in the browser using 256bit AES in [Galois Counter mode](https://en.wikipedia.org/wiki/Galois/Counter_Mode).
## Image variants
This is an image optimized for PostgreSQL, MariaDB & MySQL storage backends. Please see the [generic image](https://hub.docker.com/r/privatebin/nginx-fpm-alpine) for details on how to use this image, other images variants and the different tags.

@@ -1,7 +0,0 @@
# PrivateBin on Nginx, php-fpm & Alpine with S3 Storage backend
**PrivateBin** is a minimalist, open source online [pastebin](https://en.wikipedia.org/wiki/Pastebin) where the server has zero knowledge of pasted data. Data is encrypted and decrypted in the browser using 256bit AES in [Galois Counter mode](https://en.wikipedia.org/wiki/Galois/Counter_Mode).
## Image variants
This is an image optimized for the S3 Storage backend. Please see the [generic image](https://hub.docker.com/r/privatebin/nginx-fpm-alpine) for details on how to use this image, other images variants and the different tags.

buildx.sh (105 changed lines)
@@ -1,105 +0,0 @@
#!/bin/bash
# exit immediately on non-zero return code, including during a pipe stage or on
# accessing an uninitialized variable and print commands before executing them
set -euxo pipefail
EVENT="$1"
IMAGE="$2"
EDGE=false
[ "$3" = edge ] && EDGE=true
build_image() {
# shellcheck disable=SC2068
docker build \
--pull \
--no-cache \
--load \
$@ \
.
}
push_image() {
# shellcheck disable=SC2068
docker buildx build \
--platform linux/amd64,linux/386,linux/arm/v6,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x \
--pull \
--no-cache \
--push \
--provenance=false \
$@ \
.
}
is_image_push_required() {
[ "${EVENT}" != pull_request ] && { \
[ "${GITHUB_REF}" != refs/heads/master ] || \
[ "${EVENT}" = schedule ]
}
}
main() {
local TAG BUILD_ARGS IMAGE_TAGS
if [ "${EVENT}" = schedule ] ; then
TAG=nightly
else
TAG=${GITHUB_REF##*/}
fi
case "${IMAGE}" in
fs)
BUILD_ARGS="--build-arg ALPINE_PACKAGES= --build-arg COMPOSER_PACKAGES="
;;
gcs)
BUILD_ARGS="--build-arg ALPINE_PACKAGES=php84-openssl --build-arg COMPOSER_PACKAGES=google/cloud-storage"
;;
pdo)
BUILD_ARGS="--build-arg ALPINE_PACKAGES=php84-pdo_mysql,php84-pdo_pgsql --build-arg COMPOSER_PACKAGES="
;;
s3)
BUILD_ARGS="--build-arg ALPINE_PACKAGES=php84-curl,php84-mbstring,php84-openssl,php84-simplexml --build-arg COMPOSER_PACKAGES=aws/aws-sdk-php"
;;
*)
BUILD_ARGS=""
;;
esac
IMAGE="privatebin/${IMAGE}"
IMAGE_TAGS="--tag ${IMAGE}:latest --tag ${IMAGE}:${TAG} --tag ${IMAGE}:${TAG%%-*} --tag ghcr.io/${IMAGE}:latest --tag ghcr.io/${IMAGE}:${TAG} --tag ghcr.io/${IMAGE}:${TAG%%-*}"
if [ "${EDGE}" = true ] ; then
# build from alpine:edge instead of the stable release
sed -e 's/^FROM alpine:.*$/FROM alpine:edge/' Dockerfile > Dockerfile.edge
BUILD_ARGS+=" -f Dockerfile.edge"
# replace the default tags, build just the edge one
IMAGE_TAGS="--tag ${IMAGE}:edge --tag ghcr.io/${IMAGE}:edge"
IMAGE+=":edge"
else
if [ "${EVENT}" = push ] ; then
# append the stable tag on explicit pushes to master or (git) tags
IMAGE_TAGS+=" --tag ${IMAGE}:stable --tag ghcr.io/${IMAGE}:stable"
fi
# always build latest on non-edge builds
IMAGE+=":latest"
fi
build_image "${BUILD_ARGS} ${IMAGE_TAGS}"
docker run -d --rm -p 127.0.0.1:8080:8080 --read-only --name smoketest "${IMAGE}"
sleep 5 # give the services time to start up and the log to collect any errors that might occur
test "$(docker inspect --format="{{.State.Running}}" smoketest)" = true
curl --silent --show-error -o /dev/null http://127.0.0.1:8080/
if docker logs smoketest 2>&1 | grep -i -E "warn|emerg|fatal|panic|error"
then
exit 1
fi
docker stop smoketest
if is_image_push_required ; then
push_image "${BUILD_ARGS} ${IMAGE_TAGS}"
fi
rm -f Dockerfile.edge "${HOME}/.docker/config.json"
}
[ "$(basename "$0")" = 'buildx.sh' ] && main

@@ -1,3 +0,0 @@
#!/bin/execlineb -P
foreground { cp -r /etc/s6/services /run }
s6-svscan /run/services

@@ -1,44 +0,0 @@
server {
listen 8080 default_server;
listen [::]:8080 default_server;
root /var/www;
index index.php index.html index.htm;
location / {
# no-transform tells Cloudflare and others to not change the content of
# the file and thus breaking SRI.
# https://developers.cloudflare.com/cache/about/cache-control#other
add_header Cache-Control "public, max-age=3600, must-revalidate, no-transform";
add_header Cross-Origin-Embedder-Policy require-corp;
# disabled, because it prevents links from a paste to the same site to
# be opened. Didn't work with `same-origin-allow-popups` either.
# See issue #109 for details.
#add_header Cross-Origin-Opener-Policy same-origin;
add_header Cross-Origin-Resource-Policy same-origin;
add_header Referrer-Policy no-referrer;
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options deny;
add_header X-XSS-Protection "1; mode=block";
# Uncomment to enable HSTS
# https://www.nginx.com/blog/http-strict-transport-security-hsts-and-nginx/
#add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
include /etc/nginx/location.d/*.conf;
try_files $uri $uri/ /index.php$is_args$args;
}
location ~ \.php$ {
include /etc/nginx/location.d/*.conf;
fastcgi_pass unix:/run/php-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
# Prevent exposing nginx + version to $_SERVER
fastcgi_param SERVER_SOFTWARE "";
}
include /etc/nginx/server.d/*.conf;
}

@@ -1,3 +1,6 @@
# Run as a unique, less privileged user for security reasons.
user nobody www-data;
# Sets the worker threads to the number of CPU cores available in the system for best performance.
# Should be > the number of CPU cores.
# Maximum number of connections = worker_processes * worker_connections
@@ -67,5 +70,6 @@ http {
client_max_body_size 15M;
# Load even moar configs
include /etc/nginx/http.d/*.conf;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*.conf;
}

@@ -0,0 +1,25 @@
server {
listen 80 default_server;
listen 8080 default_server;
root /var/www;
index index.php index.html index.htm;
location / {
include /etc/nginx/location.d/*.conf;
try_files $uri $uri/ /index.php$is_args$args;
}
location ~ \.php$ {
include /etc/nginx/location.d/*.conf;
fastcgi_pass unix:/run/php-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
# Prevent exposing nginx + version to $_SERVER
fastcgi_param SERVER_SOFTWARE "";
}
include /etc/nginx/server.d/*.conf;
}

@@ -0,0 +1 @@
/etc/nginx/sites-available/site.conf

@@ -1,43 +0,0 @@
[global]
daemonize = no
error_log = /dev/stderr
[www]
listen = /run/php-fpm.sock
access.log = /dev/null
clear_env = On
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
; Amazon Web Services variables used with S3 backend
env[AWS_ACCESS_KEY_ID] = $AWS_ACCESS_KEY_ID
env[AWS_CONTAINER_AUTHORIZATION_TOKEN] = $AWS_CONTAINER_AUTHORIZATION_TOKEN
env[AWS_CONTAINER_CREDENTIALS_FULL_URI] = $AWS_CONTAINER_CREDENTIALS_FULL_URI
env[AWS_CONTAINER_CREDENTIALS_RELATIVE_URI] = $AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
env[AWS_DEFAULT_REGION] = $AWS_DEFAULT_REGION
env[AWS_PROFILE] = $AWS_PROFILE
env[AWS_ROLE_ARN] = $AWS_ROLE_ARN
env[AWS_ROLE_SESSION_NAME] = $AWS_ROLE_SESSION_NAME
env[AWS_SECRET_ACCESS_KEY] = $AWS_SECRET_ACCESS_KEY
env[AWS_SESSION_TOKEN] = $AWS_SESSION_TOKEN
env[AWS_STS_REGIONAL_ENDPOINTS] = $AWS_STS_REGIONAL_ENDPOINTS
env[AWS_WEB_IDENTITY_TOKEN_FILE] = $AWS_WEB_IDENTITY_TOKEN_FILE
env[AWS_SHARED_CREDENTIALS_FILE] = $AWS_SHARED_CREDENTIALS_FILE
; allows changing the default configuration path
env[CONFIG_PATH] = $CONFIG_PATH
; Google Cloud variables used with GCS backend
env[GCLOUD_PROJECT] = $GCLOUD_PROJECT
env[GOOGLE_APPLICATION_CREDENTIALS] = $GOOGLE_APPLICATION_CREDENTIALS
env[GOOGLE_CLOUD_PROJECT] = $GOOGLE_CLOUD_PROJECT
env[PRIVATEBIN_GCS_BUCKET] = $PRIVATEBIN_GCS_BUCKET
; allow using custom backend settings
env[STORAGE_HOST] = $STORAGE_HOST
env[STORAGE_LOGIN] = $STORAGE_LOGIN
env[STORAGE_PASSWORD] = $STORAGE_PASSWORD
env[STORAGE_CONTAINER] = $STORAGE_CONTAINER

@@ -4,6 +4,9 @@
; fixation via session adoption with strict mode. Defaults to 0 (disabled).
session.use_strict_mode=On
; Enable assert() evaluation.
assert.active=Off
; This determines whether errors should be printed to the screen as part of the output or if they
; should be hidden from the user. Value "stderr" sends the errors to stderr instead of stdout.
display_errors=Off

@@ -0,0 +1,18 @@
[global]
pid = /run/php-fpm7.pid
daemonize = no
error_log = /dev/stderr
[www]
user = nobody
group = www-data
listen = /run/php-fpm.sock
listen.owner = nobody
listen.group = www-data
access.log = /dev/null
clear_env = On
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

@@ -1,9 +0,0 @@
#!/bin/execlineb -P
forx -o 127 timer { 0 1 2 3 4 5 6 7 8 9 }
ifelse {
test -S /var/run/php-fpm.sock
} {
/usr/sbin/nginx
}
foreground { sleep 1 }
exit 127

@@ -1,2 +0,0 @@
#!/bin/execlineb -P
/usr/sbin/php-fpm84

etc/services.d/nginx/run (new file, 2 lines)
@@ -0,0 +1,2 @@
#!/usr/bin/execlineb -P
/usr/sbin/nginx

@@ -0,0 +1,2 @@
#!/usr/bin/execlineb -P
/usr/sbin/php-fpm7

@@ -1,41 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1
mQINBFtqB6oBEADM2ydU1BWvAVGbAj6Q8eLEbiXcHAAGdYu6DgQVQo0tsUejbBOj
4YCAjwl8vShGMUlJoXGvR+WOrkB9OHWpl9uI+hS0R6RX7PxF4GpNtO7cnQcAUHc9
WauAgAfu+3n8t9FRDIzT2lPuSjLFEvmVixfNa41nMG5Zuzf45mJlvTe3CwY85eBW
uWzTKDhCcv+ETdsQGsSCVRqyPztNL9eE6JaNGpxhmGqe9M09DxC73wR/va8zGjrD
uIsxhKRPb7XQaH0nI+s8r+EsWezZD2UNL7Zp3ID6KSVVcYbqXLY/cz4eEVg4hnIJ
WP8OIMPftqXJRt71F13GrtzKE8YXhEo7IE2WcbLCICzD1ZAj1sJizEaXKdSKuAK8
AL9d9K2PhnzaprKKjLd9TYdeqL9bsRW/il6OrCWvSzevbkX1Z/xU3eeQF4OUrqKe
JddwDqULm7UkL5niFXEFtMXSVLV9ppVU4s2jX7pz+JGyce1S9Nbf+kGT7Ks69XS7
+ves1uynu8UtBQjv0Xc+NFuzYmthfvB7zvpMTk6nNtN3PpxX8NvcJffR5zNkr15a
2I2+IELhYzGmp1xZeW1kykARX4M4ZD9GW/tAA+5zDoWGdmQv3Zw1ifLieyfhMmKh
5wrQQ6wlM5MCoj1YY4WvnyD68ac/3WTCPKmDtNcXhj3E3tfhDvFKKPXTZQARAQAB
tDVQcml2YXRlQmluIHJlbGVhc2UgKHNvbGVseSB1c2VkIGZvciBzaWduaW5nIHJl
bGVhc2VzKYkCOAQTAQIAIgIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AFAltq
ev0ACgkQ4Rt5UOnhg9u9RQ/+O1R+dMqCTr8iaW7lkdiEEXohy5efUjlqexL+4WRF
/97X4WCEOYNPsUkcOerTCQ8SZlMBLRTQmrTVaKn3Yd0fonMFeDGNyGbuCvWZr96r
6hPbEkXCbD9rBAvhbcAYTCsQyYNtq9DY26PJHd5nOBVoDb4sWkjn5i+/ECKGAjDq
yuvNbB+kjiyBeg8ph6o60m78RL/4wWoGGwBuijaNu1m4WFqneRNz0V+GfNjsPwoh
+XSNBj1C3+Ys9qn3EakLfmmKFkNOBIGyiaWam9c+Anj8w0eC5GlFslTVlx7OpEse
zTP9dCsFoDfpxQT6Fnx128uBGrry5s40fy2hjc3t+RAWWfgkRNupCD7eXeD71EtL
cCvkNPrcCBYAf/0YgbZGmh5HzHnBjPxNYuG/sjIDBPiXWqp0a4legbkWj7WADGv2
AhspjkUzjnq7yEEu93LnXDvC9nxkUwU5uLWiWkzC4T1+fid5w5+gwNGt7BuG1nzo
ok8SjdHUa3h1N6U9/BLExgM8ptmqxkdT6sAhfPmRKTh483aF4NQChNFBBEUUWR8z
HhAC4GUjhMODtqx1o9+HjPBHtt4tiiPwzcR6zef4nKyi0Y3jrfoGnjWQ1z5JrGqT
mySsBdQ1rmA3N7T3LlDDAr/V6Gvu3aX4PIwNEoxq9gP+XWGpQkd0AxKn2Dvob3jG
iaiJAhwEEAECAAYFAltqfCkACgkQD1yUCmvYH5KxhQ/+O0QmI0HBq404A5Q50P0g
r/f6K33SNuBrC+qrmcshCNGC8AW6dryDvY2+caJx4oeoV4ToBoECPgWwHvUJgF5d
UWBI8gh1Wxs3XSrf++9kmIfoezH8RHsXa03QGCU7AS+0M3zsQjk4dBwRXfwf8/PJ
5tMkkou1sSfFHuAQdjVMzC32Qdru0jaK/HU/Gx/oPoL4obCfniAc8koKDXLHbIYI
FKc3V1jpNShE++yvv3TEr4GDI+DkAkkH1d15pETd331GXrN8djorLMKooS7eJWiU
WMetHUfPAALQImo1wAROQUf5O/2yN+t5HsId9RPQopUtYGf1BIUelSoKU9bn0pMh
yonKR8zCx7VWnIXxoR6fkhGoW/v5XyomthjYKom8Ok1HLPM8/mCRyPSNq3cp/B7n
fcdMkTKc9h9Tv004PA4BjNtVycK34Gj2GsjSzr5THiagsNzvIIfmDKjTirOdXtRA
5/oRKgTNEAz/qT9Y3EfmFni4cq3JBU7sQq6BEy48J4HSrjEVlzwNbXFeT/LTDNKX
wj1GgSM3vK3y1Pt1fH9aKTRL1Awahtn4+LTUrnm7Iq+Kq74n7MtA6WQNy+RjS4DO
9qfhtuJa4Pa4Y/KFc4JoBcsvI7B1PYE2xRRYLBQJak31PIK0+/7gn6mGpMkLRSO8
PIP40VAkZWr13GxsW+c+2fE=
=zwiR
-----END PGP PUBLIC KEY BLOCK-----