SeaFile is an open source file sharing platform similar to Dropbox. Unlike Dropbox, SeaFile can be hosted on your own machine, which gives you full control over your cloud. It can also improve security, since no third-party hosting platform ever holds your data.

SeaFile includes a number of features such as client-side encryption, public upload and download links, link sharing, antivirus integration, and more. You can read more about SeaFile on the official website.

In this tutorial, we will be covering how to install SeaFile on Ubuntu 16.04. Just follow these steps and you’ll be good to go.

Prerequisites

  • SSH access (e.g. via PuTTY) with a sudo-privileged user
  • A machine running Ubuntu 16.04
  • A LAMP stack installed – refer to the tutorial How to install LAMP stack on Ubuntu if you need to set one up

Please note that this tutorial uses a dummy domain name and IP address for demonstration. I will keep the SeaFile files in the directory www.example1.com/seafile and run Seahub on my demo server's IP address.

Before you start the installation, please make sure that the system repositories are up to date by running the following command.

sudo apt-get update

Installing Dependencies

The SeaFile installation requires the following packages to be installed on the server. If they are missing, the installation will fail:

  • python 2.7
  • python-setuptools
  • python-imaging
  • python-ldap
  • python-mysqldb
  • python-urllib3
  • python-memcache (or python-memcached)

Use the following command to install the dependency packages on Ubuntu 16.04:

sudo apt-get install python2.7 libpython2.7 python-setuptools python-imaging python-ldap python-mysqldb python-memcache python-urllib3


Download Latest SeaFile Package

In this step, we will download the latest SeaFile package to our server. You can find the latest version on the official website's download page. At the time of writing, the latest version is 6.0.9, which I use throughout this article.


Before downloading the SeaFile package, make sure you are in the directory where you want to keep the SeaFile files. Then download the package with the following command:

wget https://bintray.com/artifact/download/seafile-org/seafile/seafile-server_6.0.9_x86-64.tar.gz

Once the download completes, use the following command to extract the tar package into the current directory.

tar -xzf seafile-server_6.0.9_x86-64.tar.gz

This creates a folder named seafile-server-6.0.9 on the server containing all the SeaFile files.

Navigate to the seafile-server directory by using the following command

cd seafile-server-6.0.9

Installing Seafile on Server

There is an automatic installation script available with the SeaFile package that will simplify the SeaFile installation on the server.

The installation script can even set up the databases and database users on the fly if you provide the MySQL root password. Otherwise, you can create the databases and users manually and supply the details when the script asks for them.

I will let the script create the databases automatically by choosing option 1 when prompted. If you have already created the databases and their user, choose option 2 instead.

Just run the setup-seafile-mysql.sh script from the seafile-server directory to begin the seafile setup

sudo ./setup-seafile-mysql.sh

Running this script will initialize the SeaFile installation on the server. Read the instructions carefully and fill in the details. You can see the values I used for my installation in the transcript below:

This script will guide you to setup your SeaFile server using MySQL.
Make sure you have read SeaFile server manual at

https://github.com/haiwen/seafile/wiki

Press ENTER to continue
—————————————————————–
What is the name of the server? It will be displayed on the client.
3 – 15 letters or digits
[ server name ] LEBDemo // Type a name for Seafile Server

What is the ip or domain of the server?
For example: www.mycompany.com, 192.168.1.101
[ This server’s ip or domain ] 104.168.101.139 // I have used IP address. You can use even your domain name instead

Where do you want to put your seafile data?
Please use a volume with enough free space
[ default “/var/www/example1.com/public_html/seafile/seafile-data” ] // I have left it as default. Better to choose a directory that is different from the public directory for better security

Which port do you want to use for the seafile fileserver?
[ default “8082” ] // I used the default port; you may prefer to choose a different one

——————————————————-
Please choose a way to initialize seafile databases:
——————————————————-

[1] Create new ccnet/seafile/seahub databases
[2] Use existing ccnet/seafile/seahub databases

[ 1 or 2 ] 1 // I chose option 1, so the script creates the databases for seafile automatically. If you have already created the databases, choose option 2 and enter the database details when asked.

What is the host of mysql server?
[ default “localhost” ] // By default it is localhost

What is the port of mysql server?
[ default “3306” ] // By default it is 3306. Just press enter if you don’t have a different port number

What is the password of the mysql root user?
[ root password ] // You must provide MySQL root password for the script to create databases.

verifying password of user root … done

Enter the name for mysql user of seafile. It would be created if not exists.
[ default “seafile” ] seafileuser // Create a database user for seafile.

Enter the password for mysql user “seafileuser”:
[ password for seafileuser ] // type a password for the user created above

verifying password of user seafileuser … done

Enter the database name for ccnet-server:
[ default “ccnet-db” ] sfccnet // type a database name for ccnet-server

Enter the database name for seafile-server:
[ default “seafile-db” ] sfserver // type a database name for seafile-server

Enter the database name for seahub:
[ default “seahub-db” ] sfhub // type a database name for seahub

———————————
This is your configuration
———————————

server name: LEBDemo
server ip/domain: 104.168.101.139

seafile data dir: /var/www/example1.com/public_html/seafile/seafile-data
fileserver port: 8082

database: create new
ccnet database: sfccnet
seafile database: sfserver
seahub database: sfhub
database user: seafileuser


Press Enter when prompted to continue. The script will then start configuring SeaFile on the server.


After the installation completes, navigate to the seafile-server directory (if you are not already there) using the following command:

cd seafile-server-6.0.9

Starting the Server

Once you are in the seafile-server directory, run the following script to start the seafile service:

sudo ./seafile.sh start


Now, start the Seahub website using the following command:

sudo ./seahub.sh start 8001

Please note that I have used port 8001. If you omit the port, the script will use the default port 8000.

Since this is the first run of the server, it will ask you to configure the Seahub admin account. Just fill in the details to continue.


Congratulations! You have successfully installed seafile on the server.

Now you can access the seahub over the URL domain.com:port or ipaddress:port

i.e., I can access the Seahub server at http://104.168.101.139:8001, since I used the IP 104.168.101.139 and port 8001.

Once logged in, you will see the SeaFile Seahub dashboard.

You can also download the desktop client from the official website's download page and manage your files and libraries from your local machine.
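If you later need to restart SeaFile (after a reboot, for instance), the same scripts accept a stop argument. Here is a small wrapper sketch; the install path below is an assumption based on the directories used in this tutorial, so adjust it to match your setup:

```shell
# write a tiny restart helper; the cd path is assumed from this tutorial's layout
cat > ~/restart-seafile.sh <<'EOF'
#!/bin/sh
cd /var/www/example1.com/public_html/seafile/seafile-server-6.0.9 || exit 1
./seafile.sh stop
./seahub.sh stop
./seafile.sh start
./seahub.sh start 8001
EOF
chmod +x ~/restart-seafile.sh
```

Run it with sudo ~/restart-seafile.sh whenever you need a clean restart of both services.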

 


I’m going to show you how you can set up Docker on your Ubuntu PC and start using it for WordPress development. Along the way, I’ll show you why you would want to do that. Then I’ll dive into the details and share step-by-step instructions.

So, why bother using Docker? After all, you could get by with a text editor and an FTP client.

That’s the approach I used when I first got into WordPress development. Back then, I would edit files on my own computer and upload them to the server. I’d do all my testing on the remote machine, on a separate WordPress installation.

While it worked, it was pretty uncomfortable. I soon got frustrated with the delays every time I wanted to test a small code change. So I was happy when I learned about WAMP.

Now I could run my LAMP stack on my Windows box, build the entire project, and then deploy it when it was done. Even then, there were some difficulties I had to work around.

For one thing, Windows and Linux have some subtle differences that tripped me up a few times. One obvious issue is the way they deal with file paths.

There are some features in PHP that address these differences, but it was still a pain. So my next step was to install Ubuntu on my PC. Now I could develop in a native Linux environment. In fact, I could set it up so it was virtually identical to my web server.

This was great for my own projects, but I ran into major difficulties the first time I worked for a client.

Dependency Hell

The client in question was running an old version of PHP on their server. In fact, they had old versions of everything – MySQL, Apache, Linux, the works.

Unfortunately, they couldn’t update their server software because that would break all their code. Suddenly I found myself in a situation where I couldn’t code on my Linux box and upload the code when I was done. All because the packages on my PC were up to date.

I’d just taken my first bold step into the nightmare that developers affectionately call “dependency hell”. It took me several days to come up with a workaround.

Virtualization

After some frenzied research, I realized I would have to use a combination of Virtualbox and Vagrant to recreate the conditions on the server. Virtualbox is a free tool from Oracle that allows you to run virtual machines on your computer.

This allowed me to run a separate instance of Linux alongside my native system. I could set it up just right, duplicating all the eccentricities of my client’s out of date server.

Vagrant is a great tool that simplifies the process of running multiple VM versions.

Even then, it wasn’t easy. The version of PHP on the client’s box was so long in the tooth that it wasn’t available in any of the repositories. Instead, I had to track down the old source code and compile it from scratch. And that meant tracking down outdated versions of every library used by PHP. All in all, I spent 5 days setting up the environment just to get back to square one.

Nonetheless, once I finally got things set up, the rest of the job was smooth sailing. And from that point on, Vagrant and Virtualbox became my indispensable web dev buddies.

That is until Docker appeared on the scene.

Why Docker?

So, why should you use Docker? After all, virtual machines offer all the advantages I mentioned above. Of course, one reason you may feel tempted is Docker's sheer popularity.

After all, much as we may try to deny it, every profession is prone to following trends. We all want to do what the “cool kids” are doing, and web developers are not immune.

Docker has grown to be one of the most popular open source projects over the last few years, so it certainly has popularity going for it. But there are very real reasons for that popularity.

Let’s examine some of these:

Less Overhead

Running a virtual machine gives you ultimate control over your development environment. But that control comes at a cost – running a second OS inside your host OS requires plenty of resources. A virtual machine has all the requirements of a physical one. It demands processor time, memory, disk space, and so on.

A modern OS is quite a big beast, and Linux is no different. After all, it’s designed to run in many different conditions, and deal with every requirement a user could have.

Docker isn’t a virtual machine. It doesn’t create a “pretend computer” that attaches itself to your PC, leeching the resources and slowing your machine to a crawl. Instead, it uses clever tricks to execute your apps and dependencies through the existing Linux kernel – the one that started when you booted your Ubuntu machine.

So the performance is much better. Your machine is under less strain, and so applications run fast – that includes the code you are working on as well as your regular apps outside the container.

Working on a responsive machine is much nicer than struggling with an unresponsive PC. And it’s better for your productivity, too.

Packaging

Docker wraps your app into a single object which you can deploy as a complete entity. All your dependencies, settings, and configurations are neatly bundled into a single package that you can deploy to any platform capable of running Docker.

This immediately reduces the headaches of deployment, which can be considerable. Hours (or even days) can be wasted fine tuning your production and development environments to get your application running right.

In a worst case scenario, you may find a new application is completely incompatible with essential apps that are already running on the server. In that case, you have to spend days or even weeks tweaking your new app to run on the production machine.

Of course, this is more of an issue for developers working on custom apps – WordPress tends to lower the integration headaches by providing a relatively stable platform to develop on.

For instance, a relatively simple unambitious plugin or theme should work with most versions of WordPress, as the WordPress API is quite reliable. But then again, how many real-world projects are “relatively simple”?

In reality, you’ll come to love Docker for reducing the pain of deployment.

Application-Oriented

As I mentioned, there are plenty of existing solutions for virtualization. These well-known and battle-tested projects have been a mainstay of IT departments for years now. But they are mostly server-oriented. They’re designed for their primary users – server administrators who deal with multiple physical boxes.

That’s why these projects are geared towards provisioning. They automate the manual tasks a server admin would have to do by hand when they set up a new server environment.

As developers, we have different priorities. For us, the application is the most important component. The rest of the server architecture is something we would rather take for granted.

With more traditional server provisioning tools (such as Puppet) we would inherit a large mass of overhead that we don’t need or want. Docker does away with this added complexity by focusing our attention on the job at hand – developing our applications.

Version Tracking

Any developer with more than a couple of projects under their belt knows the value of version tracking. Before the age of version control systems, keeping track of different versions of source code files was a major pain.

Development is an evolutionary process. As we code new features and fix bugs, we constantly change the codebase we are working on. Change is good, but it’s also a source of all kinds of problems. Fixing one bug often adds two or three additional ones. And they aren’t always immediately apparent.

Tracking down which change caused a specific bug can be hard for a single developer. When a team is working on the project, the problem can be magnified by several orders of magnitude.

That’s why git is such an indispensable tool, for single developers and teams alike.

Version tracking is essential for individual source code files. And it’s essential for different versions of your themes and plugins, too. That’s why Docker’s built-in version tracking is a Good Thing.

Public Registry

I’m sure I don’t have to convince you how valuable open-source development is. As a WordPress developer, you gain a massive head start by building on the work of others. Every component in your stack is a complex piece of software that you don’t have to code yourself.

Docker’s registry makes it easier to share code with the community – and to benefit from the work other developers have made available!

What’s more, you aren’t just sharing code snippets or individual files – you can download complete micro-services and even complete WordPress installations. This gets you off to a running start the next time you begin a project.

This is just a small taste of the many benefits that docker brings to the table. While I could go on and spend the rest of this article listing the virtues of Docker, I think you get the point.

So, let’s look at how you can get started.

Installing Docker

One of the virtues of using Ubuntu as a development platform is the ease of installing popular packages. And Docker is certainly a popular package!

As long as your machine is connected to the Internet, you can install Docker with a few apt-get commands in the terminal.

But, before you do that, it’s worth doing a little housekeeping.

First, let’s ensure your system meets the minimum requirements. Docker only works on 64 bit systems. And it requires Linux kernel version 3.10 or later. You can check the kernel version in a terminal by typing:

uname -r

The first few digits are the version number – as long as it’s 3.10 or greater, you’re good to go.
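If you'd rather not eyeball the digits, a short sketch using sort -V (assumes GNU coreutils, standard on Ubuntu) can do the comparison for you:

```shell
# compare the running kernel's major.minor against the 3.10 minimum Docker needs
required="3.10"
kernel=$(uname -r | cut -d. -f1,2)
lowest=$(printf '%s\n%s\n' "$required" "$kernel" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
    echo "kernel $kernel is new enough for Docker"
else
    echo "kernel $kernel is too old for Docker" >&2
fi
```

sort -V sorts version strings, so whichever of the two comes out first is the lower version.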

The next step is to update your PC’s list of repositories, so it can fetch the latest and most stable versions of the packages you will be installing. You can do this with a single terminal command:

sudo apt-get update

The screen will fill with a bunch of text as APT fetches up-to-date package lists from its repositories. It usually only takes a few seconds.

Next, you have to ensure that APT can install packages over https, using CA certificates. You can install these features with:

sudo apt-get install apt-transport-https ca-certificates

This could take a few minutes depending on how many packages APT has to install.

The next step is to install the GPG key. GPG stands for GNU Privacy Guard – APT will use this key to verify the signatures on the Docker packages and ensure they are legitimate.

Type the following into the terminal:

sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

The next step is to add the Docker repo to APT. You do this by editing or creating a file under /etc/apt/sources.list.d/docker.list

The file is a single line, which contains the address of the repo. You have to enter the correct address for the version of Ubuntu you are using – in our case, we’ll be using the repo for Ubuntu 16.04.

Type the following lines into your terminal:

sudo -i

echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" > /etc/apt/sources.list.d/docker.list

exit
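If you'd rather not open a root shell, the same file can be written in a single command with sudo tee; this is an equivalent alternative to the three commands above, not an extra step:

```shell
# write the repo definition via tee, which performs the privileged write for us
repo_line="deb https://apt.dockerproject.org/repo ubuntu-xenial main"
echo "$repo_line" | sudo tee /etc/apt/sources.list.d/docker.list
```

tee also echoes the line back to the terminal, so you can see exactly what was written.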

Now you have to update APT’s repo list again, with:

sudo apt-get update

If you had previously installed an old version of docker, you’ll have to purge it with:

sudo apt-get purge lxc-docker

Now let’s check that APT is pulling from the correct repo with:

apt-cache policy docker-engine

You should see a list of packages for Ubuntu Xenial.

Now you can install Docker’s dependencies. Type:

sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual

This will install the appropriate Linux image extras package for your kernel version.

Ubuntu 16.04 and above already have most of the other dependencies for Docker. If you’re using an older version of Ubuntu, you’ll have to install the other dependencies manually. You can find a list at:

https://docs.docker.com/engine/installation/linux/ubuntulinux/

Finally, we’re ready to install Docker! All you have to do is type:

sudo apt-get install docker-engine

Sit back and let APT do its magic. If APT runs into any problems (such as missing dependencies) it will give you a helpful description of what went wrong. If you have any missing dependencies in the list, you’ll have to install them one by one using sudo apt-get install.

You may have to reboot your machine after installing these dependencies, especially if they included a new Linux kernel.

As soon as APT finishes the installation, you can start the docker daemon:

sudo service docker start

Finally, test it with:

sudo docker run hello-world

This will run a test container and print a simple message to the terminal.

So, you have Docker installed. The next step is to install Docker Compose. Docker compose is a useful tool for running docker applications, and it simplifies your workflow. It also allows you to compose applications from multiple containers that communicate with each other. This is called a micro-service architecture – but that’s beyond the scope of today’s article.

Docker Compose isn’t in the regular Ubuntu repos, so you’ll have to install it from the GitHub repository. The exact installation instructions vary with each new release, so you should check them out at https://github.com/docker/compose/releases

Here are the steps I followed as I was writing this article:

sudo -i

curl -L https://github.com/docker/compose/releases/download/1.8.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose

chmod +x /usr/local/bin/docker-compose

exit
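Since the release number appears in the download URL, it helps to pin it in one variable. This sketch just builds and prints the URL (1.8.1 was the current release when this was written), leaving the actual fetch to you:

```shell
# build the release URL from one pinned version number and the host's OS/arch
COMPOSE_VERSION=1.8.1
URL="https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)"
echo "$URL"
# then, as root:
# curl -L "$URL" -o /usr/local/bin/docker-compose && chmod +x /usr/local/bin/docker-compose
```

Upgrading later is then a one-line change to COMPOSE_VERSION.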

Now, let’s set up a WordPress environment. Previously, I told you that one of the benefits of Docker is the way other users share their environments to speed up your development work.

There are plenty of WordPress containers you can choose – the one I picked for this article is from visible.vc. They’ve shared it on GitHub, so you can quickly clone the project to your computer:

git clone https://github.com/visiblevc/wordpress-starter your-project-name

cd your-project-name

Now all you have to do is run the container in Docker:

docker-compose up

Docker Compose will download the docker image and start running it. It will take care of all the dependencies, from the MySQL database to the Akismet WordPress plugin.

And there you have it – your own fresh WordPress install, running on localhost:8080.

Using Your WordPress Container

Right now, we have a bare-bones WordPress installation. You probably intend to do more.

So the next step is to start sharing directories between your PC and the Docker container. This allows you to edit the files on your machine, and run them inside the container.

You can control the directory sharing from the docker-compose.yml file. This is a simple YAML file that contains all the settings you need. YAML is a lightweight human readable data format, so you’ll have no difficulty getting to grips with it.

Open it up in your favorite editor, and turn your attention to the “volumes” section at the end. Right now it says:
data: {}

We want to change that to share directories from the project. You should already have a directory containing a starter theme in the ./wp-content/themes/the-theme directory. You might want to rename that later, but for now, let’s use it as it is.

Replace the contents of the volumes section with:

- ./data:/data
- ./wp-content/themes/the-theme:/app/wp-content/themes/the-theme

Inside the container, the directory appears as /app/wp-content/themes/the-theme. WordPress sees this as one of the installed themes, and you can use it in the normal way.

You can also set up the WordPress database to your liking by editing the contents of /data/database.sql before you run docker-compose up. You can also use the WP Migrate DB plugin, just like you would with any other working WordPress installation.

There are more instructions for this container at https://github.com/visiblevc/wordpress-starter.

Conclusion

So, you’ve learned why you should use Docker, and you’ve set up your first WordPress development container. You’ve taken your first steps towards a more streamlined and stable workflow! There is more to learn about Docker, but you know enough to get started on your first project. Happy coding!


In order to migrate the files on a VPS box from one provider to another, one of the best solutions for the job is to use a tool called Rsync.

Let’s take a closer look at Rsync and see how it can help you rapidly move virtual private servers from one provider to another.

About Rsync

Rsync is one of the most common ways to copy/backup/restore files and folders between two locations, regardless of whether these endpoints are local or remote servers.

It supports compression, encryption and incremental transfer which makes the app an extremely versatile and useful tool for systems administrators.

Note: Running Rsync does not require you to be logged in as root.

Since Rsync supports incremental file transfer, the first run takes as long as any other copy command, since every file must be transferred. On subsequent runs, however, Rsync determines which changes have been made and transfers only the changed files.

This mechanism saves the system admin time, and it saves server load and bandwidth.

Getting Acquainted with Rsync

Let’s get started and copy our live running server to a new location in 5 easy steps.

Step 1) Ensure that your OS is in Place

In order to migrate your server to a new location, the first step is to install your operating system onto new infrastructure. You can determine what architecture your current server is running on with the following command:

uname -a

You will want to use a Linux distro and kernel as close as possible to the ones installed on the server you are migrating.

Most VPS providers will set this up for you when you buy your new VPS server.

Tip: You may only need to do this step if you are building out a server locally. Hypervisors such as VMware help administrators streamline the process of installing operating systems onto hardware.

Step 2) Check the connection between the 2 servers

Once you have both systems up and running, you will need to check that you can make a connection between the two servers.

You can easily do that with the ssh command. Assuming you run SSH from the new server and connect to the old one, if the old server asks for a password, you passed the test!

ssh user@oldserver
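Optionally, setting up key-based login now will spare you password prompts during the long transfer in step 5. A sketch (the key path is the OpenSSH default; skip this if you already use keys):

```shell
# generate a key only if one doesn't exist yet
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N "" -q
# install the public key on the old server (this asks for the password one last time):
# ssh-copy-id user@oldserver
```

After ssh-copy-id succeeds, ssh and rsync to the old server no longer prompt for a password.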

Step 3) Check Rsync

At this point you will have to verify that Rsync is installed on both systems and if not, it’s time to install it. You can check if the command is present in the following way:

which rsync

If the tool is not present, you can easily install it using one of the following commands:

apt-get install rsync (on Ubuntu-based distros)
yum install rsync (on CentOS-based distros)

Step 4) Prepare the Exclude List

Now you only need to decide which directories to exclude. This may vary from system to system, but I suggest always excluding the following unless you have a specific reason not to:

/etc/fstab
/etc/sysconfig/network-scripts/* (CentOS distros)
/etc/network/* (Ubuntu distros)
/proc/*
/tmp/*
/sys/*
/dev/*
/mnt/*
/boot/*
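One convenient way to manage this list, assuming the Ubuntu variant of the paths above (swap in /etc/sysconfig/network-scripts/* on CentOS), is to keep it in a plain text file and hand it to rsync via --exclude-from:

```shell
# save the exclusions to a file, one pattern per line
cat > rsync-excludes.txt <<'EOF'
/etc/fstab
/etc/network/*
/proc/*
/tmp/*
/sys/*
/dev/*
/mnt/*
/boot/*
EOF
```

You can then run something like rsync -auHxv --numeric-ids --exclude-from=rsync-excludes.txt root@SRC-IP:/ / and keep the command line readable.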

Step 5) Run Rsync

Some VPS administrators may worry about running Rsync while a MySQL instance is running.

In most cases, this won’t present a problem. You might consider running it outside of heavy load periods if your server is hosting a live system, but other than that you should not have any problems.

Of course you will not have a consistent copy of the DB unless you stop the service before beginning Rsync, so please keep that in mind.

In this instance, Rsync will create a copy that lets you test the system on the new server, which is always a big plus.

An Example of Rsync at Work

Assuming you are logged into the destination server, you’d implement a command that looks like this:

rsync -auHxv --numeric-ids --exclude="/etc/fstab" --exclude="/etc/network/*" --exclude="/proc/*" --exclude="/tmp/*" --exclude="/sys/*" --exclude="/dev/*" --exclude="/mnt/*" --exclude="/boot/*" --exclude="/root/*" root@SRC-IP:/* /

Once it finishes, simply reboot your destination server and you will notice that you will have a precise copy of the files located on your source VPS.


In this tutorial I’ll explain how to install and configure the RainLoop webmail interface with Apache. Its main features include:

  • Modern user interface.
  • Complete support of IMAP and SMTP protocols including SSL and STARTTLS.
  • Sieve scripts (Filters and vacation message).
  • Minimalistic resources requirements.
  • Direct access to mail server is used (mails are not stored locally on web server).
  • Allows for adding multiple accounts to primary one, simultaneous access to different accounts in different browser tabs is supported. Additional identities.
  • Administrative panel for configuring main options.
  • Really simple installation and update (the product is updated from admin panel).
  • Integration with Facebook, Google, Twitter and Dropbox.
  • Managing folders list.
  • Simple look’n’feel customization.
  • Configurable multi-level caching system.
  • Extending functionality with plugins installed through admin panel.
  • Perfect rendering of complex HTML mails.
  • Drag’n’drop for mails and attachments.
  • Keyboard shortcuts support.
  • Autocompletion of e-mail addresses.

I. Installation 

Download the package, extract it, and place the files in the directory the application will run from, for example /var/www/rainloop. Subsequent examples assume that directory; substitute your actual path if you use a different one.

mkdir /var/www/rainloop && cd /var/www/rainloop
wget https://www.rainloop.net/repository/webmail/rainloop-community-latest.zip
unzip rainloop-community-latest.zip

Grant the read/write permissions required by the application:

find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;
chown -R www-data:www-data .

Set up a new site in Apache with a configuration similar to this:

<VirtualHost mail.example.com:80>
DocumentRoot /var/www/rainloop

ServerName mail.example.com

ErrorLog "/var/log/rainloop-error_log"
TransferLog "/var/log/rainloop-access_log"

<Directory /var/www/rainloop>
Options +Indexes +FollowSymLinks +ExecCGI
AllowOverride All
Order deny,allow
Allow from all
Require all granted
</Directory>

</VirtualHost>

Enable Apache’s rewrite module, if it’s not already enabled (a2enmod rewrite), then reload Apache.

II. Configuration


1. There are two ways to configure the product – via the admin panel, or by modifying the application.ini file manually.

The web interface allows configuring basic options only, which should suffice in most cases. Modifying the configuration file manually gives you access to all options, including experimental ones.

To access admin panel, use URL of the following kind: http://mail.example.com/?admin

Default login is “admin”, password is “12345”. Be sure to change the default password immediately after logging in.

2. The configuration file application.ini is found within a directory structure of a special kind, like this:
/var/www/rainloop/data/_data_/_default_/configs/application.ini

_default_ is the subdirectory used in a single-domain installation; in a multi-domain installation, your web domain takes the place of “_default_”.

The “application.ini” file uses the typical structure of INI files; its configuration options are fully documented inline.

That’s it, enjoy!

In this tutorial I’ll show you how to install and configure a mail system (Dovecot and Postfix) on Ubuntu 16.04, with ViMbAdmin as the front end for managing your domains.

At the end of this process, you’ll have:

  • ViMbAdmin installed and managing your virtual domains, mailboxes and aliases;
  • Postfix installed and configured for:
      • email delivery / acceptance to your virtual mailboxes and aliases;
      • TLS available on port 25;
      • SSL on port 465;
      • email relay for authenticated users only.
  • Dovecot installed and configured for:
      • IMAP over SSL;
      • POP3 over SSL;
      • ManageSieve with TLS support;
      • LMTP for local mail delivery to your virtual mailboxes.

Preparation:

Install the required packages and dependencies:

apt-get install --yes php7.0-cgi php7.0-mcrypt php-memcache php7.0-mysql \
php7.0-json libapache2-mod-php7.0 php-gettext memcached git mysql-server \
subversion

PHP composer can be installed via:

php -r "readfile('https://getcomposer.org/installer');" | php
mv composer.phar /usr/local/bin/composer

Set your timezone in /etc/php/7.0/apache2/php.ini and /etc/php/7.0/cli/php.ini, for example:

date.timezone = "UTC"
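If you prefer to script this step, a sed one-liner can set the value in both files; the snippet below assumes the stock Ubuntu 16.04 PHP 7.0 paths and overwrites any existing (possibly commented-out) date.timezone line:

```shell
# Uncomment and set date.timezone in both PHP configurations.
for ini in /etc/php/7.0/apache2/php.ini /etc/php/7.0/cli/php.ini; do
  sed -i 's~^;\?date\.timezone =.*~date.timezone = "UTC"~' "$ini"
done
```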

I. ViMbAdmin

export INSTALL_PATH=/srv/vimbadmin
git clone https://github.com/opensolutions/ViMbAdmin.git $INSTALL_PATH
cd $INSTALL_PATH
composer install --dev

If you plan to run under Apache / other web server, ensure you set the ownership on the var/ directory appropriately:

chown -R www-data: $INSTALL_PATH/var

Database Setup

Log into your MySQL (or other) database and create a new user and database:

CREATE DATABASE `vimbadmin`;
GRANT ALL ON `vimbadmin`.* TO `vimbadmin`@`localhost` IDENTIFIED BY 'password';
FLUSH PRIVILEGES;

Configuration

cp $INSTALL_PATH/application/configs/application.ini.dist $INSTALL_PATH/application/configs/application.ini

You now need to set your database parameters from above in this file. You’ll find these near the top and here is an example:

resources.doctrine2.connection.options.driver = 'pdo_mysql'
resources.doctrine2.connection.options.dbname = 'vimbadmin'
resources.doctrine2.connection.options.user = 'vimbadmin'
resources.doctrine2.connection.options.password = 'password'
resources.doctrine2.connection.options.host = 'localhost'
cp $INSTALL_PATH/public/.htaccess.dist $INSTALL_PATH/public/.htaccess

Database Creation

cd $INSTALL_PATH
./bin/doctrine2-cli.php orm:schema-tool:create

If all goes well, you should see:

$ ./bin/doctrine2-cli.php orm:schema-tool:create
ATTENTION: This operation should not be executed in a production environment.

Creating database schema...
Database schema created successfully!

Apache2

You need to tell Apache where to find ViMbAdmin and what URL it should be served under. In this example, we’re going to serve it from /vimbadmin (e.g. www.example.com/vimbadmin). As such, we create an Apache configuration block as follows on our web server:

Alias /vimbadmin /srv/vimbadmin/public

<Directory /srv/vimbadmin/public>
Options FollowSymLinks
AllowOverride FileInfo

# For Apache <= 2.3:
Order allow,deny
allow from all

# For Apache >= 2.4
# Require all granted 
</Directory>

Ensure mod_rewrite is enabled:

a2enmod rewrite

Restart Apache and you can now browse to your new installation.

Welcome to Your New ViMbAdmin Installation
You should now be greeted with a welcome page. If you did not set the security salts earlier, the installer will generate random strings for them; place them in vimbadmin/application/configs/application.ini as instructed before continuing. If you did set them, enter the value in the Security Salt input box.

This is a security step to ensure that only the person performing the installation can create a super administrator.

Now enter a username (which must be an email address) and a password.

Once you click save, you’re done! Log in and work away.

II. Dovecot

Dovecot will provide support for:

  • IMAP mail access;
  • POP3 mail access;
  • the ManageSieve service;
  • the local delivery protocol (LMTP) – Postfix passes emails it accepts for local delivery off to this process to be stored on the filesystem.

Install the Dovecot related packages via:

apt-get install --yes dovecot-core dovecot-imapd dovecot-managesieved \
dovecot-pop3d dovecot-sieve dovecot-mysql \
dovecot-lmtpd mail-stack-delivery

We will store all emails under /srv/vmail and we need to create a user with the appropriate uid and gid used in this example:

groupadd -g 2000 vmail
useradd -c 'Virtual Mailboxes' -d /srv/vmail -g 2000 -u 2000 -s /usr/sbin/nologin -m vmail

Configuring Dovecot

Remove (clear) an unnecessary file which will interfere with our configuration:

echo "" >/etc/dovecot/conf.d/99-mail-stack-delivery.conf

Go to /etc/dovecot/conf.d and replace the contents of these files:

*don’t forget to replace mail.example.com with your domain

dovecot-10-auth.conf

auth_mechanisms = plain login
!include auth-sql.conf.ext

dovecot-10-mail.conf

mail_location = maildir:/srv/vmail/%d/%n

namespace inbox {
inbox = yes
}

mail_uid = 2000
mail_gid = 2000

mail_privileged_group = vmail

first_valid_uid = 2000
last_valid_uid = 2000

maildir_copy_with_hardlinks = yes
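In mail_location, %d and %n are Dovecot variables for the domain part and local part of the login name, so every mailbox ends up in a per-domain directory. The expansion can be illustrated in plain shell (the address is just an example):

```shell
# maildir:/srv/vmail/%d/%n for the login alice@example.com resolves to
# /srv/vmail/example.com/alice
addr="alice@example.com"
echo "/srv/vmail/${addr#*@}/${addr%%@*}"
```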

dovecot-10-master.conf

service imap-login {
inet_listener imap {
port = 143
}
inet_listener imaps {
port = 993
ssl = yes
}

service_count = 0
}

service pop3-login {
inet_listener pop3 {
port = 110
}
inet_listener pop3s {
port = 995
ssl = yes
}
}

service lmtp {
unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0666
user = postfix
}
}

service imap {
}

service pop3 {
}

service auth {
unix_listener auth-userdb {
mode = 0666
user = vmail
group = vmail
}

# Postfix smtp-auth
unix_listener /var/spool/postfix/private/auth {
mode = 0660
user = postfix
group = postfix
}
}

service auth-worker {
}

service dict {
unix_listener dict {
}
}

dovecot-10-ssl.conf

ssl = yes

ssl_cert = </etc/postfix/ssl/mail.example.com.pem
ssl_key = </etc/postfix/ssl/mail.example.com.pem

ssl_require_crl = no

dovecot-15-lda.conf

postmaster_address = postmaster@example.com
hostname = mail.example.com
quota_full_tempfail = yes
recipient_delimiter = +
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes

protocol lda {
mail_plugins = $mail_plugins sieve quota
}
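The recipient_delimiter = + setting enables plus addressing: mail for alice+lists@example.com is still delivered to alice's mailbox, and the part after the + is available to sieve rules for filing. Splitting an address the same way in shell:

```shell
# Separate the base mailbox from the +detail part of a plus address.
addr="alice+lists@example.com"
local_part="${addr%%@*}"
echo "${local_part%%+*}"   # base mailbox: alice
echo "${local_part#*+}"    # detail: lists
```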

dovecot-20-imap.conf

protocol imap {
mail_plugins = $mail_plugins quota imap_quota
}

dovecot-20-lmtp.conf

protocol lmtp {
postmaster_address = postmaster@example.com
mail_plugins = quota sieve
}

dovecot-20-managesieve.conf

service managesieve-login {
inet_listener sieve {
port = 4190
}

service_count = 1
}

service managesieve {
}

protocol sieve {
}

dovecot-20-pop3.conf

protocol pop3 {
mail_plugins = $mail_plugins quota
}

dovecot-auth-sql.conf.ext

passdb {
driver = sql
args = /etc/dovecot/dovecot-sql.conf.ext
}

userdb {
driver = prefetch
}

userdb {
driver = sql
args = /etc/dovecot/dovecot-sql.conf.ext
}

dovecot-sql.conf.ext

driver = mysql

connect = host=localhost user=vimbadmin password=password dbname=vimbadmin
default_pass_scheme = MD5

password_query = SELECT username as user, password as password, \
homedir AS userdb_home, maildir AS userdb_mail, \
concat('*:bytes=', quota) as userdb_quota_rule, uid AS userdb_uid, gid AS userdb_gid \
FROM mailbox \
WHERE username = '%Lu' AND active = '1' \
AND ( access_restriction = 'ALL' OR LOCATE( '%Us', access_restriction ) > 0 )

user_query = SELECT homedir AS home, maildir AS mail, \
concat('*:bytes=', quota) as quota_rule, uid, gid \
FROM mailbox WHERE username = '%u'
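A note on the % variables in these queries: %u is the login name, and a leading L modifier lowercases it, so '%Lu' matches usernames case-insensitively; '%Us' is, as I read the Dovecot variable documentation, the service name (imap, pop3, ...) uppercased, which is what the access_restriction comparison relies on. The lowercasing itself is trivial to reproduce:

```shell
# What Dovecot's %Lu modifier does to a login name:
printf '%s\n' 'Alice@Example.COM' | tr '[:upper:]' '[:lower:]'
# prints alice@example.com
```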

/etc/dovecot/dovecot.conf

!include_try /usr/share/dovecot/protocols.d/*.protocol
!include conf.d/*.conf
!include_try local.conf

III. Postfix

We will configure Postfix for the following purposes here:

  • accept mail for the domains / mailboxes / aliases configured in ViMbAdmin;
  • hand these messages off to Dovecot’s deliver – a local delivery agent;
  • allow mailboxes configured in ViMbAdmin to log into Postfix to relay mail.

First, we need to install the following packages:

apt-get install postfix postfix-mysql

When you are asked to choose a general type of mail configuration, choose No configuration. This should hopefully make these instructions reasonably generic.

Configuring Postfix

Replace /etc/postfix/main.cf with:

*don’t forget to replace mail.example.com with your domain

# Sample Postfix configuration for use with ViMbAdmin :: Virtual Mailbox Administration
#
# See: https://github.com/opensolutions/ViMbAdmin
#
# By Barry O'Donovan - 2014-02 - http://www.barryodonovan.com/

# See /usr/share/postfix/main.cf.dist for a commented, more complete version

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
delay_warning_time = 4h

readme_directory = no

# TLS parameters
smtpd_tls_cert_file = /etc/postfix/ssl/mail.example.com.pem
smtpd_tls_key_file = /etc/postfix/ssl/mail.example.com.pem
smtpd_use_tls = yes
smtpd_tls_session_cache_database = btree:/var/lib/postfix/smtpd_scache
smtp_tls_session_cache_database = btree:/var/lib/postfix/smtp_scache
smtpd_tls_loglevel = 1
smtpd_tls_auth_only = yes
smtpd_tls_dh1024_param_file = /etc/postfix/dh_1024.pem
smtpd_tls_dh512_param_file = /etc/postfix/dh_512.pem
smtpd_tls_eecdh_grade = strong

myhostname = mail.example.com
myorigin = mail.example.com
mydestination = localhost localhost.$mydomain
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128

mailbox_size_limit = 0
recipient_delimiter = +

inet_protocols = all
inet_interfaces = all

notify_classes = resource, software
error_notice_recipient = admin@example.com

# relay_domains =
# transport_maps = hash:/etc/postfix/transport
virtual_alias_maps = mysql:/etc/postfix/mysql/virtual_alias_maps.cf
virtual_gid_maps = static:2000
virtual_mailbox_base = /srv/vmail
virtual_mailbox_domains = mysql:/etc/postfix/mysql/virtual_domains_maps.cf
virtual_mailbox_maps = mysql:/etc/postfix/mysql/virtual_mailbox_maps.cf
virtual_minimum_uid = 2000
virtual_uid_maps = static:2000
#dovecot_destination_recipient_limit = 1
virtual_transport = lmtp:unix:private/dovecot-lmtp

smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
broken_sasl_auth_clients = yes
message_size_limit = 40000000
home_mailbox = Maildir/
smtpd_sasl_authenticated_header = yes
smtpd_sasl_security_options = noanonymous
smtpd_sasl_local_domain = $myhostname
#mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/dovecot.conf -m "${EXTENSION}"

smtp_use_tls = yes
smtpd_tls_received_header = yes
smtpd_tls_mandatory_protocols = SSLv3, TLSv1
smtpd_tls_mandatory_ciphers = medium
tls_random_source = dev:/dev/urandom

smtpd_recipient_restrictions =
  reject_unknown_sender_domain,
  reject_unknown_recipient_domain,
  reject_unauth_pipelining,
  permit_mynetworks,
  permit_sasl_authenticated,
  reject_unauth_destination
# reject_non_fqdn_hostname,
# reject_invalid_hostname

#smtpd_helo_restrictions =
# check_helo_access hash:/etc/postfix/ehlo_whitelist,
# reject_non_fqdn_hostname,
# reject_invalid_hostname
# reject_unknown_helo_hostname

smtpd_helo_required = yes

smtpd_sender_restrictions =
  reject_unknown_sender_domain
# check_sender_access hash:/etc/postfix/sender_access,

smtpd_data_restrictions =
  reject_unauth_pipelining

smtpd_client_restrictions =
  permit_sasl_authenticated
# check_client_access hash:/etc/postfix/client_access,
# reject_rbl_client zen.spamhaus.org

You need to edit /etc/postfix/master.cf to enable smtps (SMTP over SSL on port 465; TLS is supported on port 25 as part of our configuration):

smtps inet n - - - - smtpd
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_sasl_type=dovecot
  -o smtpd_sasl_path=private/auth
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
  -o milter_macro_daemon_name=ORIGINATING

ViMbAdmin Integration

Postfix integrates with our ViMbAdmin database via the settings above and by creating the following files from the samples provided (all under /etc/postfix/mysql):

  • virtual_alias_maps.cf
  • virtual_domains_maps.cf
  • virtual_mailbox_maps.cf
  • virtual_transport_maps.cf

virtual_alias_maps.cf

user = vimbadmin
password = password
hosts = 127.0.0.1
dbname = vimbadmin
query = SELECT goto FROM alias WHERE address = '%s' AND active = '1'

virtual_domains_maps.cf

user = vimbadmin
password = password
hosts = 127.0.0.1
dbname = vimbadmin
query = SELECT domain FROM domain WHERE domain = '%s' AND backupmx = '0' AND active = '1'

virtual_mailbox_maps.cf

user = vimbadmin
password = password
hosts = 127.0.0.1
dbname = vimbadmin
table = mailbox
select_field = maildir
where_field = username

virtual_transport_maps.cf

user = vimbadmin
password = password
hosts = 127.0.0.1
dbname = vimbadmin
table = domain
select_field = transport
where_field = domain
additional_conditions = and backupmx = '0' and active = '1'

Postfix with SSL
The above referenced Gist includes TLS/SSL (encryption) support for Postfix. We can create a self-signed certificate for testing as follows.

When asked to enter Common Name (eg, YOUR name) []:, ensure you enter the fully qualified name of your mail server:

*don’t forget to replace mail.example.com with your domain

mkdir -p /etc/postfix/ssl
openssl req -new -x509 -days 3650 -nodes \
  -out /etc/postfix/ssl/mail.example.com.pem \
  -keyout /etc/postfix/ssl/mail.example.com.pem
chmod 0600 /etc/postfix/ssl/mail.example.com.pem

We also need to create the Diffie-Hellman parameters:

for len in 512 1024; do
  openssl genpkey -genparam -algorithm DH -out /etc/postfix/dh_${len}.pem \
    -pkeyopt dh_paramgen_prime_len:${len}
done
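Each run writes a PEM-encoded parameter block; a quick sanity check on the results:

```shell
# Both files should contain a "BEGIN DH PARAMETERS" header.
grep -c 'BEGIN DH PARAMETERS' /etc/postfix/dh_512.pem /etc/postfix/dh_1024.pem
```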

Enjoy!

source: https://github.com/opensolutions/ViMbAdmin/wiki

This tutorial explains how to install and configure the Gerrit Code Review application on Ubuntu 14.04.

 

What is Gerrit Code Review?

Gerrit is a web application for code review. Software developers on a team can review each other's changes to the source code using a web browser, and can approve or reject those changes. Gerrit is integrated with Git, a version control system.

 

The official documentation can be found here.

 

Required packages:

 

openjdk-7-jre nginx mysql-server gitweb git git-core
*P.S.: Apache can also be used, but I preferred Nginx.

 

You also need a working SMTP server. You can use this tutorial.

 

Create a user for Gerrit:

useradd -m -d /home/gerrit -s /bin/bash -U gerrit
passwd gerrit
Enter new UNIX password: #password
Retype new UNIX password: #password
passwd: password updated successfully

 

Configure Nginx:

 

1. Create a new site:

 

nano /etc/nginx/sites-available/gerrit

Configuration:

server {
 listen 80;
 server_name gerrit.exemplu.ro; #the site's URL

 error_log /home/gerrit/gerrit/logs/gerrit-proxy-error.log;
 root /home/gerrit/gerrit/;

 location @gerrit {
 sendfile off;
 proxy_pass http://127.0.0.1:8082;
 proxy_redirect default;
 proxy_set_header Host $host;
 proxy_set_header X-Real-IP $remote_addr;
 proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
 proxy_max_temp_file_size 0;

 #this is the maximum upload size
 client_max_body_size 10m;
 client_body_buffer_size 128k;
 proxy_connect_timeout 90;
 proxy_send_timeout 90;
 proxy_read_timeout 90;

 proxy_buffer_size 4k;
 proxy_buffers 4 32k;
 proxy_busy_buffers_size 64k;
 proxy_temp_file_write_size 64k;
 }

 location / { try_files $uri @gerrit; }

}

 

2. Enable the site:

 

ln -s /etc/nginx/sites-available/gerrit /etc/nginx/sites-enabled/gerrit
service nginx restart

* Restarting nginx nginx                                                 [ OK ]

 

Create the MySQL database:

 

mysql -p

Enter password: #mysql_root_password

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 50
Server version: 5.5.43-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

 

Enter these commands:

 

CREATE USER 'gerrit'@'localhost' IDENTIFIED BY 'parola_baza_de_date';
CREATE DATABASE reviewdb;
GRANT ALL ON reviewdb.* TO 'gerrit'@'localhost';
FLUSH PRIVILEGES;
exit;

 

Initialize Gerrit:

 

1. Create a directory:

 

root@vps2:/# su gerrit
gerrit@vps2:/$ cd ~
gerrit@vps2:~$ mkdir gerrit
gerrit@vps2:~$ cd gerrit/
gerrit@vps2:~/gerrit$

 

2. Download Gerrit:

 

wget http://gerrit-releases.storage.googleapis.com/gerrit-2.11.war

More recent (nightly) builds can be found HERE.

 

3. Install / configure Gerrit:

 

java -jar gerrit-2.11.war init -d ~/gerrit

Using secure store: com.google.gerrit.server.securestore.DefaultSecureStore

*** Gerrit Code Review 2.11
***
*** Git Repositories
***

Location of Git repositories [git]: #Press Enter

*** SQL Database
***

Database server type [h2]: mysql
Server hostname [localhost]: #Press Enter
Server port [(mysql default)]: #Press Enter
Database name [reviewdb]: #Press Enter
Database username [gerrit]: #Press Enter
gerrit's password : #The database (reviewdb) password
confirm password :

*** Index
***

Type [LUCENE/?]: #Press Enter

The index must be rebuilt before starting Gerrit:
java -jar gerrit.war reindex -d site_path

*** User Authentication
***

Authentication method [OPENID/?]: #Press Enter

*** Review Labels
***

Install Verified label [y/N]? n

*** Email Delivery
***

SMTP server hostname [localhost]: #Press Enter
SMTP server port [(default)]: #Press Enter
SMTP encryption [NONE/?]: #Press Enter
SMTP username : #Press Enter

*** Container Process
***

Run as [gerrit]: #Press Enter
Java runtime [/usr/lib/jvm/java-7-openjdk-amd64/jre]: #Press Enter
Copy gerrit-2.11.war to /home/gerrit/gerrit/bin/gerrit.war [Y/n]? #Press Enter
Copying gerrit-2.11.war to /home/gerrit/gerrit/bin/gerrit.war

*** SSH Daemon
***

Listen on address [*]: gerrit.exemplu.ro
Listen on port [29418]: #Press Enter

Gerrit Code Review is not shipped with Bouncy Castle Crypto SSL v151
If available, Gerrit can take advantage of features
in the library, but will also function without it.
Download and install it now [Y/n]? y
Downloading http://www.bouncycastle.org/download/bcpkix-jdk15on-151.jar ... OK
Checksum bcpkix-jdk15on-151.jar OK

Gerrit Code Review is not shipped with Bouncy Castle Crypto Provider v151
** This library is required by Bouncy Castle Crypto SSL v151. **
Download and install it now [Y/n]? y
Downloading http://www.bouncycastle.org/download/bcprov-jdk15on-151.jar ... OK
Checksum bcprov-jdk15on-151.jar OK
Generating SSH host key ... rsa... dsa... done

*** HTTP Daemon
***

Behind reverse proxy [y/N]? y
Proxy uses SSL (https://) [y/N]? #Press Enter
Subdirectory on proxy server [/]: #Press Enter
Listen on address [*]: 127.0.0.1
Listen on port [8081]: 8082
Canonical URL [http://null/]: http://gerrit.exemplu.ro/

*** Plugins
***

Installing plugins.
Install plugin download-commands version v2.11 [y/N]? y
Install plugin reviewnotes version v2.11 [y/N]? n
Install plugin singleusergroup version v2.11 [y/N]? y
Install plugin replication version v2.11 [y/N]? y
Install plugin commit-message-length-validator version v2.11 [y/N]? n
Initializing plugins.
No plugins found with init steps.

Initialized /home/gerrit/gerrit

 

4. Download / install the GitHub plugin:

 

wget -O ~/gerrit/plugins/github-plugin-2.11.jar https://ci.gerritforge.com/view/Plugins-stable-2.11/job/Plugin_github_stable-2.11/lastSuccessfulBuild/artifact/github-plugin/target/github-plugin-2.11.jar

wget -O ~/gerrit/lib/github-oauth-2.11.jar https://ci.gerritforge.com/view/Plugins-stable-2.11/job/Plugin_github_stable-2.11/lastSuccessfulBuild/artifact/github-oauth/target/github-oauth-2.11.jar

 

5. Enable the GitHub plugin:

 

The Gerrit application must be registered on GitHub HERE.


java -jar gerrit-2.11.war init -d ~/gerrit

Using secure store: com.google.gerrit.server.securestore.DefaultSecureStore

*** Gerrit Code Review 2.11
***
*** Git Repositories
***

Location of Git repositories [git]: #Press Enter

*** SQL Database
***

Database server type [mysql]: #Press Enter
Server hostname [localhost]: #Press Enter
Server port [(mysql default)]: #Press Enter
Database name [reviewdb]: #Press Enter
Database username [gerrit]: #Press Enter
Change gerrit's password [y/N]? #Press Enter

*** Index
***

Type [LUCENE/?]: #Press Enter

The index must be rebuilt before starting Gerrit:
java -jar gerrit.war reindex -d site_path

*** User Authentication
***

Authentication method [OPENID/?]: HTTP
Get username from custom HTTP header [y/N]? Y
Username HTTP header [SM_USER]: GITHUB_USER
SSO logout URL : /oauth/reset

*** Review Labels
***

Install Verified label [y/N]? #Press Enter

*** Email Delivery
***

SMTP server hostname [localhost]: #Press Enter
SMTP server port [(default)]: #Press Enter
SMTP encryption [NONE/?]: #Press Enter
SMTP username : #Press Enter

*** Container Process
***

Run as [gerrit]:
Java runtime [/usr/lib/jvm/java-7-openjdk-amd64/jre]: #Press Enter
Upgrade /home/gerrit/gerrit/bin/gerrit.war [Y/n]? #Press Enter
Copying gerrit-2.11.war to /home/gerrit/gerrit/bin/gerrit.war

*** SSH Daemon
***

Listen on address [gerrit.exemplu.ro]:
Listen on port [29418]:

*** HTTP Daemon
***

Behind reverse proxy [Y/n]? #Press Enter
Proxy uses SSL (https://) [y/N]? #Press Enter
Subdirectory on proxy server [/]: #Press Enter
Listen on address : 127.0.0.1
Listen on port [8081]: 8082
Canonical URL [http://gerrit.exemplu.ro]:

*** Plugins
***

Installing plugins.
Install plugin download-commands version v2.11 [y/N]? #Press Enter
Install plugin reviewnotes version v2.11 [y/N]? #Press Enter
Install plugin singleusergroup version v2.11 [y/N]? #Press Enter
Install plugin replication version v2.11 [y/N]? #Press Enter
Install plugin commit-message-length-validator version v2.11 [y/N]? #Press Enter
Initializing plugins.

*** GitHub Integration
***

GitHub URL [https://github.com]: #Press Enter
GitHub API URL [https://api.github.com]: #Press Enter

NOTE: You might need to configure a proxy using http.proxy if you run Gerrit behind a firewall.

*** GitHub OAuth registration and credentials
***

Register Gerrit as GitHub application on:
https://github.com/settings/applications/new

Settings (assumed Gerrit URL: http://gerrit.exemplu.ro/)
* Application name: Gerrit Code Review
* Homepage URL: http://gerrit.exemplu.ro/
* Authorization callback URL: http://gerrit.exemplu.ro/oauth

After registration is complete, enter the generated OAuth credentials:
GitHub Client ID : #See GitHub
GitHub Client Secret : #See GitHub
confirm password :
Gerrit OAuth implementation [HTTP/?]: #Press Enter
HTTP Authentication Header [GITHUB_USER]: #Press Enter

Initialized /home/gerrit/gerrit

 

6. Enable git garbage collection:

 

Add the following to the file ~/gerrit/etc/gerrit.config:

 

[gc]
startTime = Fri 12:00
interval = 4 day

 

7. Reindex Gerrit:

 

java -jar gerrit-2.11.war reindex -d ~/gerrit
[2015-05-09 16:19:42,217] INFO com.google.gerrit.server.git.LocalDiskRepositoryManager : Defaulting core.streamFileThreshold to 124m
[2015-05-09 16:19:43,104] INFO com.google.gerrit.server.cache.h2.H2CacheFactory : Enabling disk cache /home/gerrit/gerrit/cache
Reindexing changes: done
Reindexed 0 changes in 0.0s (0.0/s)
[2015-05-09 16:19:45,626] INFO com.google.gerrit.server.cache.h2.H2CacheFactory : Finishing 4 disk cache updates

 

8. Start Gerrit:

 

~/gerrit/bin/gerrit.sh start

 

9. Replication (pushing to GitHub):

 

Projects are created in Gerrit using the format: github_account_name/repository

Create a new file ~/gerrit/etc/replication.config with the following content:

 

[remote "github"]
url = git@github.com:${name}.git 
push = +refs/heads/*:refs/heads/*
push = +refs/tags/*:refs/tags/*
push = +refs/*:refs/*
timeout = 30
threads = 2
createMissingRepositories = false
replicationDelay = 1
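For the replication pushes to succeed, the gerrit system user must be able to authenticate to GitHub over SSH: it needs a key pair that the GitHub account accepts, and github.com must be in its known_hosts. A sketch, run as the gerrit user (the default key path is an assumption; adjust if you keep keys elsewhere):

```shell
# Create an SSH key without a passphrase and pre-trust GitHub's host key.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
ssh-keyscan github.com >> ~/.ssh/known_hosts
# Finally, add the contents of ~/.ssh/id_rsa.pub to the GitHub account's SSH keys.
```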

 

10. Restart Gerrit:

~/gerrit/bin/gerrit.sh restart

 

If everything went according to this tutorial, Gerrit should now be working properly.

 

Congratulations!
This tutorial is inspired by the one found HERE