VMware LVM storage expand

# rescan the disk so the kernel notices the size change made in VMware
cd /sys/class/block/sdb/
echo 1 > device/rescan
# grow the LVM physical volume to use the new space
pvresize /dev/sdb
# extend the logical volume over all free space in the volume group
lvextend -l +100%FREE /dev/local/mysql
# grow the ext filesystem to fill the logical volume
resize2fs /dev/local/mysql
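
To double-check that each step took effect, a few read-only commands help (device and VG/LV names match the example above):

lsblk /dev/sdb        # does the kernel see the new disk size?
pvs /dev/sdb          # has the physical volume grown?
lvs local/mysql       # has the logical volume been extended?
df -h                 # has the mounted filesystem grown?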

Why shared storage environments suck

In our environment everything runs on Dell servers with VMware on them, connected to redundant NetApp storage with plenty of fast SAS drives. A typical HA VMware solution with plenty of performance.

After a bug in glibc was announced, I found a sufficient fix and decided to apply it globally. So I just executed yum update glibc on all 70 VMs we have, and suddenly everything stopped responding.
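
If I were doing it again, I would stagger the update instead of firing it everywhere at once. A minimal sketch of that idea (the hosts.txt list of VM hostnames and SSH key access are assumptions, not part of our actual setup):

# run the same update one VM at a time instead of all at once
for host in $(cat hosts.txt); do
    ssh "$host" "yum -y update glibc"
    sleep 60   # give the shared storage time to settle between hosts
done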

The infrastructure just wasn’t able to handle such an IO load and everything stopped working.

I then tested the same scenario on a very similar environment based on local hard drives and KVM, and although the IO load was high, everything kept working properly.

I am now waiting for the official report on what happened, as the only response I got from their support was “We don’t know, let us know when it happens again”.

[Update #1]: Yep, just another lag on the NetApp. Shared storage environments really have no future! Guys, it’s time to leave enterprise solutions behind and start using wicked technologies.

GitLab CI runner automatic deployment

I have been playing with GitLab for a while. As I have mentioned several times before, they make an amazing product that fits most situations. However, their deployment process is not what I would have expected. I understand that requiring Ruby 2+ is a big challenge, as it is not the default Ruby in the Debian world.

Anyway, CentOS/RHEL 7 has been released and it ships a Ruby 2+ version by default, so we can nicely automate the deployment of GitLab.

You can find prepared rpm packaging system here: https://github.com/yarikdot/gitlab-ci-runner-rpm-build
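
The build itself follows the usual rpmbuild flow; a minimal sketch, assuming the repository provides a spec file (the spec file name below is hypothetical):

# install the build toolchain
yum install -y rpm-build rpmdevtools git
# set up the standard ~/rpmbuild tree
rpmdev-setuptree
# grab the packaging repo and build the package
git clone https://github.com/yarikdot/gitlab-ci-runner-rpm-build
cd gitlab-ci-runner-rpm-build
rpmbuild -ba gitlab-ci-runner.spec   # spec file name is hypothetical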

Feel free to get in touch if anything is not working.

Server hosting environment is not about uptime anymore

We have been using private cloud services based on shared storage and VMware for a while. It is very stable, to be fair. However, we reached the limit of available capacity and needed to deploy a few more servers. Not a big deal, we just asked our provider to add another host and expected to be able to continue installing the next day. We have a very good payment history and don’t pay them a small amount of money, so I didn’t expect any issues. The answer shocked me – it took them 1 week to prepare an offer and another 2 weeks to deploy the server. I don’t understand that, to be honest. We didn’t want anything special, just another host.

My country is small and competition in the server business is high. In my previous job, when we asked our rack hosting provider to lend us 10 servers because we had performance issues, we got them within ONE HOUR. That means the payment paperwork handled, servers racked, cabled, and prepared for deployment with IPMI information.

I got a call from our cloud provider today informing us that the server is ready in the datacenter, but they connected it with a broken cable and it will be fixed another day. I am not going to blame them for not testing it, but I am wondering what is wrong with them. Changing a cable can be done even by a stupid monkey. What is the problem with calling the datacenter and using the remote hands service?

The same goes for spare hardware. Every time I see a hosting company keeping hardware in their office, I ask: why on earth don’t you have it racked and prepared to deploy? It would greatly decrease deployment time when a customer orders a server. You can rack all the servers whenever the guy in the datacenter has time and prepare the cabling properly to reduce mess. When an order comes in, you just swap memory/HDDs, configure the correct VLAN (if it is not done automatically) and everything is deployed within minutes.

The last issue I had was with their support. Every time I call them I have to go through a customer check, which usually takes 5 minutes because they can’t find us in the database. Even phones from a few years ago supported importing contacts from an external service. They have my phone number in their database; the only thing they have to do is sync that data to the phone. It would be amazing if I called them and the other side answered: “Hello Peter, Joe from ServerHosting speaking. How can I help you?”

The hosting world is small, and even if you are a big player, small things like this can impress your customers and make them feel better about working with you.

GitHub

I love using tools made by other people, because there is no reason to reinvent the wheel. However, sometimes I can’t find anything usable. To help the community grow, I have decided to publish some of my work on GitHub. Feel free to check out my GitHub profile.

There is not much there yet, but I am planning to publish some Zabbix plugins I made, deb repository management tools and plenty of other handy tools I use every day.

Scaling PHP

Before we started the project I am personally most proud of, we didn’t know much about how to scale PHP applications. At the beginning there were only 2 of us – 2 Linux guys with some PHP knowledge and a few servers. Nevertheless, we weren’t afraid of facing strange issues. 4 years later, we have one of the biggest sites in our country.

You can’t even imagine how many downtimes we had due to our lack of knowledge. I regret not having read the “Scaling PHP” book earlier. It describes everything we went through – which also means everything that caused our downtimes.

Even though I don’t read much, I spent 30 minutes a day on the tram/tube to and from work and finished this book in 3 weeks.

If you still work on improving yourself (which you definitely should), this book is worth reading.

Thank you Steven for writing such a great book.

Understanding GitLab installation

It took me a while to understand how to install GitLab on my server. The simplest part of GitLab to install is the gitlab-ci runner, as it doesn’t require any database, …

First of all, I want to support developers in using the latest technology; I don’t like supporting old stuff. On the other hand, I now understand why the omnibus packages are so shitty.

The biggest pain in the ass is Ruby. We are currently running on CentOS 6 (the latest stable release), which has Ruby 1.8.7 as the default version. We also manage all servers via Puppet. The cool thing is that the Puppet installation is really smooth: just add the Puppet repository, type yum install puppet and it will install Ruby automatically.
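
For reference, the whole Puppet installation really is just two commands; the release RPM URL is an assumption based on the usual Puppet Labs layout for EL6:

# add the Puppet Labs repository (URL assumed, check the current release RPM)
rpm -Uvh https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
# install puppet; yum pulls in the system Ruby (1.8.7) as a dependency
yum install -y puppet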

The issue comes when you want to use Puppet (which uses Ruby 1.8.7) and GitLab (which uses Ruby 2.0.0+) on the same machine. You must have two Ruby installations to support both of them.
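
One way to do that, as a minimal sketch: build Ruby 2.0 from source into its own prefix so it does not touch the system Ruby that Puppet uses (the exact version and download URL are assumptions):

# build dependencies
yum groupinstall -y "Development Tools"
yum install -y openssl-devel readline-devel zlib-devel
# download and unpack Ruby 2.0 (version/URL assumed)
curl -O http://cache.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p481.tar.gz
tar xzf ruby-2.0.0-p481.tar.gz
cd ruby-2.0.0-p481
# install into /opt so /usr/bin/ruby (1.8.7 for Puppet) stays untouched
./configure --prefix=/opt/ruby-2.0
make && make install
# GitLab services can then be started with PATH=/opt/ruby-2.0/bin:$PATH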

Many friends of mine struggle with the GitLab installation. They are mostly PHP developers, and debugging what is missing and why it is not working is extremely difficult for them.

My personal advice to the GitLab developers is to backport their software to run (at least) on Ruby 1.8.7, which is the current CentOS stable version, and Ruby 1.9.3 (Debian stable). Check the Redmine packages to see how easy installation can be: just a few dependencies and that’s it. There is no point in compiling nginx if you can just declare it as a dependency.

Most users are not handy enough to install your software by hand. Make it more comfortable for them to use and you will see growth in the number of users. I can even imagine putting your fancy software into the official repositories.