Hosting your own Gem repository

July 26, 2011

I am working on building an all-in-one infrastructure server for our AWS-EC2 VPC cloud environment.

All-in-One = DNS, Syslog, SMTP, Linux repo, Gem repo, and Puppet Server

The one I have not set up before is the Gem repo. I found some posts online, but none seemed to work. This is the simplest way possible to host your own Gem repo:

On your gem host, run the following commands (a consolidated sketch follows the list):

  1. Install rubygems (I am using SuSE, so): zypper install rubygems
  2. Install some gems: gem install puppet
  3. Start the gem server: gem server & (the & runs it in the background)
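
Here is the whole server side in one place. Treat it as a minimal sketch: it assumes SuSE with zypper and the gem server's default port of 8808; nohup is my addition so the server survives a logout, and the iptables line is just one example of opening the port if clients sit on another VLAN.

  # Server-side setup on the gem host (SuSE). gem server listens on port 8808 by default.
  zypper install rubygems        # install rubygems itself
  gem install puppet             # install the gems you want to serve
  nohup gem server &             # serve the locally installed gems in the background; nohup keeps it alive after logout
  # Only if clients are on a different VLAN: open port 8808 (iptables example, adjust for your firewall)
  iptables -A INPUT -p tcp --dport 8808 -j ACCEPT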

On the client, run the following:

  1. /usr/bin/gem sources -r http://rubygems.org/ (removes the default external source)
  2. /usr/bin/gem sources -a http://yourserver:8808 (open the firewall for port 8808 if you are on different VLANs)

On the client, run gem install puppet and it should now install from your internal gem server.
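
Put together, the client side looks like this. The gem sources --list check is my addition as a sanity step, and yourserver is just a placeholder for your gem host's hostname.

  # Client-side setup: swap the external gem source for the internal one
  /usr/bin/gem sources -r http://rubygems.org/     # remove the default external source
  /usr/bin/gem sources -a http://yourserver:8808   # add the internal gem server
  /usr/bin/gem sources --list                      # verify only the internal source is listed
  /usr/bin/gem install puppet                      # should now pull from yourserver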

Categories: Cloud Automation

SANkiller

May 8, 2011

Most companies today have a SAN/NAS environment, and every data class lives on it: CIFS, NFS, SQL databases, backups, application data, virtual machines, and much, much more. And most SAN/NAS hardware is marked up way beyond retail; we end up paying through the nose without even thinking about it. The problem is we have been trained to think that the FC SAN is the only storage option that provides redundancy and performance.

Before SAN/NAS we had no way of centralizing our storage. In other words, we could not write to just one filesystem or one pool of storage. There were DAS setups, and NFS was the only way of sharing DAS storage with more than one server. This was clumsy, and performance was not scalable. SAN/NAS gave us a central high-performance array that could be shared with all hosts. The traditional SAN traveled over fiber, and NAS over 100Mb/1Gb Ethernet. So the birth of SAN/NAS attached storage started a revolution that ran for years. Every company I have worked for or interviewed with in the past has had a SAN/NAS environment. We love the speed, we love the redundancy, and we like having it all in one spot. At first it was only expensive FC; now we can use cheaper SATA disks and cut the price by more than half. But aren’t we still overpaying? We need to start thinking about our data and where it belongs. Let’s list what a traditional FC SAN gives us:

  • Using two SAN switches and two fiber cards per host, you get FCP path redundancy – A+
  • Using FC disks and a fiber network, you get lossless data packets and a low-latency network – A+
  • Using an array to drive your disk throughput gives you serious performance – A+
  • Enterprise/midrange arrays have hardware redundancy – A+
  • Powerful applications that provide mirroring and DR – A+

These are all great things to have, and we pay a lot of $$$ to have them. But what if I said that with emerging technologies like AoE, 10GbE, third-party replication software, and open source distributed/clustered filesystems, you can create a storage environment providing all the power of an FC SAN/NAS array environment for a lot less $$$?

The problem with expensive FC SANs is that data is growing faster than the price of expensive SAN hardware and maintenance support is dropping. We need to re-architect our storage infrastructure and create tiers of storage utilizing high-latency, normal-latency, and low-latency storage on cheaper hardware. FC disk may go away soon, but only to be replaced by SSD technology, and there is a place for it. Many SQL databases still need low-latency, high-performance disks, so Tier 0 will always be needed. Looking beyond SQL databases, there are petabytes of stagnant data out there sitting on expensive FC disk drives, doing nothing. I know because the last two companies I have worked for have exactly that: we are paying for data that we don’t use often, or at all.

Categories: Cloud Automation