Thursday, February 28, 2013

What Would You Do If Your Laptop or Phone Were Stolen?


Have you ever thought about what you would do if your laptop or smartphone were stolen?  Have you thought about the data and information the thief would have access to?  Many people don’t think about the consequences of someone actually stealing their laptop or phone until it happens.

Sure, you've got text messages and contact information for the people closest to you, but you also potentially have important data files that may or may not belong to you.  With BYOD (Bring Your Own Device) becoming more and more popular, the files on your laptop or attached to your emails may very well contain sensitive and private information.  If any of this information gets into the wrong hands, imagine what could take place.

I’ve got a better idea.  Let’s not spend time worrying about “what if”; instead, let’s make sure that if this did happen, you would know exactly what to do and exactly how to handle the situation.  Your first instinct may be to call the police, but without any specific information or evidence, it will be hard for the authorities to take your theft report seriously.

That’s where Prey comes into play.  Prey is a free, open source application that you install on your laptop (Windows or Mac), Android device, or iPhone.  Prey runs silently in the background, using very few system resources while waiting for your commands.

This installed software is directly connected to Prey’s website, and the web interface allows you to monitor and send specific commands to your stolen device.

First, visit their website, create an account, and then install the clients on your devices.  The free version allows each user to monitor up to 3 devices!  Once you have logged into Prey’s website and installed the clients on your devices, you are prepared.

By using the commands on the web interface, you can quickly perform many actions that may assist you in locating and even retrieving your stolen device.  Let’s take a look at what we can do.

When you realize that your device has been stolen, log in to the web interface on the Prey website and mark your device as stolen.  Once the device is marked as stolen, it will begin sending reports to the Prey website, provided it has an internet connection.  If the device is 3G or 4G capable, the reports should start coming in almost immediately.  If the device can only connect to the internet via Wi-Fi, then you will need to wait until the crook connects your device to an internet source.  Once connected, Prey starts doing its magic.

While logged into the web interface, you can perform several on-demand actions.  These include sending a custom message to the main screen of the device, letting the crook know that the device is being monitored and that you are onto him.  Prey can also take a picture with the front-facing camera; if the crook is sitting in front of the device, this can serve as solid evidence for finding the thief.  Prey will also detect the location of the device using Wi-Fi and GPS.  Once Prey has collected the data, a report is sent from the device to the Prey website.  From there, you can print out the report and take it with you when you report your equipment stolen.

The information you can present to the authorities when you report the theft will be a key factor in helping them find the perpetrator.  Prey’s website offers several testimonials from users describing real-life scenarios and how they recovered their stolen phones and laptops.

My advice is to install this software on all of the devices that you own.  While the free account will only allow you to monitor up to three devices, they offer paid accounts which allow you to monitor every device in your house.

Not only will Prey assist you in locating a stolen device; if Prey is installed on your kids’ cell phones, you can use the Prey web interface to locate your child.  If a child is kidnapped and has a device with Prey installed, the reports gathered from the utility may also be a big help when trying to locate your loved one.

I use Prey.  I have tested several of its functions and features, and I have found it to be exactly as advertised.  This software provides great peace of mind, not just for the device, but for my family members who carry a device with Prey installed.

Want to learn more about Prey?  Visit http://preyproject.com and create your free account today!

Friday, February 22, 2013

The 5 Keys To A Successful Server Migration


When assigned the task of managing a new server implementation or multiple server migrations, there are many important things you need to take into consideration.  I have been a part of flawless migrations, and I have seen every aspect of a migration go completely wrong.  What determines the outcome of your migration is often non-technical.  As IT professionals, we all know the technology, we know the products, and we know the processes necessary for leading a successful migration; but things still go wrong.

More often than not, the problems didn’t arise from the technological aspects of the migration or implementation itself; they came from a lack of planning, a lack of preparation, and a lack of focus.  There are a lot of demands in the IT field.  Users need to be able to work, and to work they need their computers to work.  If the computer systems in your office don’t work, you are the first to hear about it, and generally, people aren’t nice about it.  The pressure placed on an IT team can at times be overwhelming and flustering, but you can’t let it get to you.

In this article, I would like to share the 5 keys that I believe are most important to a successful implementation or migration project.

Key 1: Planning

You should always go into any server migration project with a thorough, well-thought-out plan.  This should include step-by-step processes and procedures for every phase of your project.  Estimate the time each step will take and make sure the whole process is well documented.  Your IT team should review and discuss the plan, and all members of the team should be on the same page.  Remember, your goal is a successful migration with minimal downtime.  A plan will keep your team in check and on the right path.

Key 2: Backup

Part of your thorough plan should always include backups.  Everything in your network that could potentially be affected should be backed up.  Never, and I repeat, never go into a project or server migration without being 100% certain that you have a backup of every system included in the migration.  In the event that something goes terribly wrong with your project, you’ll want the peace of mind of knowing that you have backups of all of your relevant systems.
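If your backups land on disk, it never hurts to verify them before the migration begins.  Here is a minimal sketch (the directory layout is hypothetical) that compares checksums of the originals against their backup copies:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir: str, backup_dir: str) -> list:
    """Return the files that are missing from the backup or differ from it."""
    problems = []
    src = Path(source_dir)
    for f in src.rglob("*"):
        if f.is_file():
            copy = Path(backup_dir) / f.relative_to(src)
            if not copy.is_file() or sha256(f) != sha256(copy):
                problems.append(str(f))
    return problems
```

Run it against each system’s backup target; an empty list means every file copied over cleanly.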

Key 3: Failover

Failover goes hand in hand with backup, but to a higher degree.  You should always have a failover plan in place when working through server migrations.  The plan should work, but it doesn’t always work perfectly the first time.  Make sure that if things start to go wrong with your migration, you can easily fail over to another solution or quickly undo any changes that have been made.  You may be approaching a deadline, or other staff members may be close to coming into work, so be sure that you have yourself protected.  Always ask yourself, “What am I going to do if things do not work right after this migration?”
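A simple building block for that failover decision is a health probe: before you commit the cutover, confirm the migrated service actually answers, and fall back if it doesn’t.  A minimal sketch, with hypothetical hostnames and ports:

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_target(primary, fallback):
    """Pick the primary (host, port) if it answers, otherwise the fallback."""
    return primary if is_reachable(*primary) else fallback
```

You might probe the new server’s SMTP or HTTPS port right after the migration step, and only then repoint DNS at it.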

Key 4: Focus

Key 4 is critical to a successful migration or implementation.  You will have deadlines, demands, and maybe even a couple of speed bumps along the way.  Keep your cool, stay relaxed, and stay focused.  Many issues are caused by the small things, such as a missed step or a rushed procedure.  If you developed a well-written, well-thought-out plan for your project, you should be able to stick to that plan in a focused manner.  When the going gets tough, keep your head up and never lose focus.

Key 5: Documentation

Documenting your procedures and processes should start at the beginning of the planning phase and continue all the way through the completion of your project.  The more documentation you have on a particular project, server rollout, or network implementation, the easier it will be for you to manage not only the project, but also the infrastructure you are building.  With thorough documentation, any member of your team can assist with troubleshooting future issues.  Well-written documentation equals well-managed networks and systems.

In my 15+ years as a professional in this field, I have found that it is easy to make mistakes and even easier to lose focus.  A well-rounded team with a well-thought-out plan will help you keep your eyes on the prize.  Do not take these projects lightly, or they will come back to haunt you.  Remembering these 5 keys will give you better results at the end of any project or server migration.

Wednesday, February 20, 2013

My Exchange 2013 Nightmare


It is a new year, and that means another product update from Microsoft.  This time around, it is Exchange.  Microsoft recently released Microsoft Exchange Server 2013.  It seems they are keeping Exchange on a three-year release cycle, but do we really need to upgrade our email server every three years?

Here is where you’ll get mixed reviews.  Some IT professionals may say, “If it’s not broke, don’t fix it”, while others may say, “We have to upgrade, Microsoft said so”.  While Microsoft may be pushing businesses to move to the newly redesigned Exchange 2013 platform, my recommendation would be to hold off as long as possible, or at least until they release some sort of update for Exchange 2013.

In my first attempt to roll out a new Exchange 2013 environment, we had mixed results.  We were building a brand new domain on a new VMware server deployment.  We implemented one server as our domain controller, a virtual machine running Windows Server 2008 R2 with only Active Directory installed.  We then created another Windows Server 2008 R2 virtual server to host our Exchange installation.

After installing the many, and sometimes confusing, prerequisites, we were well on our way to the actual installation of Exchange 2013.  We got the Exchange software installed, made the necessary DNS changes, opened the necessary ports on the firewall, and held our breath.  For about the first four hours, everything was great… or was it?  Internal and external mail tests ran fine.  Users were receiving their mail on their smartphones, via webmail, and in their Outlook clients.  The implementation of Exchange 2013 was too easy.  Well, almost.

Within four hours of the Exchange server being online, one user came to me and stated, “I just received an email from an international client that was sent about 3 hours ago.  The email stated that I was supposed to be a part of a conference call at 4:00pm and it is now 4:30pm.  What happened?”

I started digging through log files, checking the event logs and even started sending some test messages.  The logs offered no insight as to what was taking place.  In fact, there was nothing in the logs that would have indicated any type of mail flow issues. 

Microsoft has ripped out the Exchange Management Console in Exchange 2013, leaving administrators with either the Exchange Management Shell or the new web-based Exchange Admin Center for administering the Exchange deployment.  One of the few tools they did leave us is the Queue Viewer.  I opened it up, and sure enough, there were hundreds of emails sitting in the queues waiting for internal and external delivery.  I thought to myself, “How could this have happened?  Everything was just working great!”

Flustered and confused, I reconfigured our DNS and firewall settings to route mail back to our Exchange 2007 server and promptly called Microsoft.  After discussing the issues, they stated that the problem was caused by Exchange 2013 trying to use IPv6 instead of IPv4.

Our entire network is based on IPv4, and we haven’t even discussed the possibility of changing over to IPv6.  Microsoft acknowledged that they were aware of the problem and that it would be addressed in the first service pack or update package for Exchange 2013.  When I inquired how long that might take, they informed me that they did not have a set date, as they were still working on correcting other issues and adding other features.
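The IPv4-versus-IPv6 preference problem isn’t unique to Exchange.  As a general illustration (not an Exchange fix), here is how a name lookup can be pinned to one address family in Python: an unfiltered query on a dual-stack host may list IPv6 addresses first, while requesting AF_INET guarantees that only IPv4 results come back.

```python
import socket

def resolve(host, family=socket.AF_UNSPEC):
    """Return the (family, address) pairs the resolver offers for a host."""
    infos = socket.getaddrinfo(host, None, family, socket.SOCK_STREAM)
    return [(fam, sockaddr[0]) for fam, _type, _proto, _canon, sockaddr in infos]

# An unfiltered lookup may return IPv6 (AF_INET6) entries first on a
# dual-stack machine; pinning the family removes the ambiguity.
ipv4_only = resolve("localhost", socket.AF_INET)
```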

So here I was, staring at one Exchange server screen, watching hundreds of incoming and outgoing emails stuck in queues that would never reach their final destination, and hundreds of other emails that had made it to delivery and were now stuck in another server’s mailbox databases, all while things ran nearly perfectly on our Exchange 2007 server.

I decided to change the DNS and firewall settings to point back to the new Exchange 2013 server and reboot it, to see if that would clear up the queues.  Sure enough, as soon as the server came back online, all incoming and outgoing mail was delivered properly.  Why would a server reboot successfully clear the queues?  I was stumped.

For the last time, I changed all of the DNS and firewall settings, updated the recipient policies to include all email domains, and moved back to the Exchange 2007 server for our email needs.  I exported the contents of the mailboxes from the Exchange 2013 server and imported them into the users’ mailboxes on the Exchange 2007 server.  Everything was back in place, and the Exchange 2013 nightmare had finally come to an end.

Since this deployment attempt, I have recreated physical and virtual servers to house Exchange 2013.  I have made DNS and firewall changes so many times in the last month that I can do it in my sleep.  It did not matter what platform I installed Exchange 2013 on; the end result was exactly the same.  It would work for some time, and then all of the mail would get stuck in delivery queues.  I would reboot the server, and the mail would be delivered.  It just didn’t make any sense.

Unfortunately, there is no happy ending to this story, nor was there some unforeseen revelation that allowed me to fix the Exchange 2013 deployment.  It simply did not work.  I have heard of other engineers having the exact same issues, and I have heard others claim that they deployed Exchange 2013 with no issues at all.  I say, if the shoe fits, wear it; unfortunately for me, I guess my feet were too big.

While the new release of Exchange 2013 is full of new features, several back-end reconfigurations, and a redesign of the EMC and OWA modules, the headaches of my initial implementation attempts have left a bad taste in my mouth.  I thought maybe we would deploy an Exchange 2010 server, wait for a service pack, and do a simple upgrade to 2013, but, oh yeah, you currently can’t migrate from Exchange 2010 to Exchange 2013.  Microsoft intends to fix this in Exchange 2013 Cumulative Update 1 (CU1), which is expected sometime in the first quarter of 2013.

I’m not holding my breath, and I am in no hurry to deploy Exchange 2013 at all.  Exchange 2007 has been as reliable as they come, so I will stick with the side that claims, “If it’s not broke, don’t fix it”.  Maybe Microsoft should take this approach with future releases of their existing software packages.

Tuesday, February 19, 2013

From Microsoft to Open Source


For my entire 15-year tenure as an Information Technology professional, I have relied on and used almost nothing but Microsoft products. Over the years I have held many Microsoft certifications, attended numerous Microsoft seminars, and trusted the products provided by Microsoft.

There seems to be a lot of validity in the phrase, "You get what you pay for", but what happens when you don't get what you paid for? I have always been fairly closed-minded when it comes to implementing anything other than Microsoft technologies in the workplace. I mean, it's what I have studied, learned, and implemented for over 15 years. Why would I explore any other options?

While I still do implement many Microsoft technologies in the workplace, there are several other places that I look for solutions to the needs of our business. You see, we are a live, on demand television and movie provider that streams hundreds of channels to your television, smart phone, tablet, or laptop computer all via the internet. Video on a bubble.

Supporting the different roles and responsibilities of the employees at our company often sends me out of the Microsoft realm and into the realm of open source solutions. I was always hesitant to put anything considered open source into production. Again, don't you get what you pay for in this industry? If so, then why would I implement software and solutions that are completely free? At first, this seemed like a horrible idea, but then the proverbial light bulb came on.

The developers of these open source applications are just like me. They found that by taking the core concept of a piece of software already in production and collaborating with other industry professionals, they could collectively develop something similar to that product. Not only have they developed products that are similar, they have developed products that are better, and free.

I would like to share with you a few of the open source technologies that I have used, implemented, and become a big fan of. There are many options out there for different open source solutions. When planning out your next big IT project, keep your mind open, and see if any of these solutions can help you out.

CentOS (www.centos.org)
CentOS is a free operating system distribution built on the Linux kernel. It is derived entirely from the Red Hat Enterprise Linux distribution. In mid-2010, CentOS became the most popular Linux distribution for web servers, with approximately 30% of all Linux-based web servers using it. So, the next time you are looking to roll out a web server, look to CentOS!

Zenoss (www.zenoss.org)
Zenoss Core is an open-source application, server, and network management platform based on the Zope application server. Zenoss provides an attractive web interface that allows system admins to monitor servers and devices easily and efficiently.

Zenoss can monitor devices and servers via SSH, SNMP, WMI, and basic ping operations to ensure availability. It also keeps an eye on disk space, event logs, and other customizable thresholds for the servers and devices in your network. Implementing Zenoss on a CentOS server will have you well on your way to monitoring all of your network devices in one easy-to-use interface.

Proxmox VE (www.proxmox.com)
Proxmox VE is an open-source virtualization platform based on KVM and OpenVZ. Proxmox Virtual Environment gives you near-bare-metal performance and leading scalability for your workloads. You can virtualize even the most demanding application workloads.

Proxmox supports Linux and Windows guests on both 32- and 64-bit platforms. It supports the latest Intel and AMD server chipsets, providing optimum virtual machine performance. With a feature-rich, built-in management layer, Proxmox contains all of the capabilities required to create and manage a strong, dependable virtual infrastructure.

KeePass (www.keepass.info)
KeePass is an open-source password manager that helps you manage your passwords in a safe and secure manner. With an encrypted database backend, you have easy access to all of your network account passwords through one master password (or a key file) that unlocks the whole database, so you only have to remember a single password. The databases are encrypted using some of the most secure encryption algorithms currently known (AES and Twofish).
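The reason one master password can protect everything is that the encryption key is derived from it. KeePass has its own key-derivation scheme, so this is only a conceptual sketch, using PBKDF2 from Python's standard library:

```python
import hashlib
import os

def derive_key(master_password: str, salt: bytes, rounds: int = 100_000) -> bytes:
    """Stretch a master password into a 256-bit key with PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode("utf-8"), salt, rounds)

salt = os.urandom(16)  # stored alongside the database; it need not be secret
key = derive_key("my master password", salt)
# The same password and salt always reproduce the same key, and that key is
# what encrypts the database itself (KeePass uses AES or Twofish for that step).
```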

OpenStack (www.openstack.org)
OpenStack is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of interrelated projects delivering various components for a cloud infrastructure solution.

VideoLan VLC Player (www.videolan.org)
VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files as well as DVD, Audio CD, VCD, and various streaming protocols.

These are just a few of the open source applications that I have put into production, and they have saved me and our company a lot of time, money, resources, and headaches. The next time you are leading an IT project, remember that there are a lot of excellent open source applications available to you at no cost.

Do not make the same mistake that I made for over 15 years. Don't be closed-minded, be open-sourced!