Author Archive


Steve, this is brilliant: almost perfect timing and, as usual, a well-planned and well-designed exit strategy… Kudos…
The die-pod, based on the iPod touch platform, is a time capsule in which the deceased's own collection of music, videos, books and applications is imprinted on a die and formatted into a ROM that cannot be erased. The new product will allow you to carry the memories of your loved ones with you wherever you go.
An optional feature sets a tiny capsule with the deceased's ashes into the die-pod, using a process that converts the ashes into a tiny diamond.
Price list:
$299 for the why-die version
$549 for the 5th dimension communication version
$150 extra charge for inclusion of the deceased's remains in the time capsule, using FedEx two-way shipping
$50 mail-in rebate for dogs and cats

No jailbreaks or unlocks recorded to date.

Steve, may jobs be with you, farewell



It is difficult not to notice the two different directions in cloud evolution; two major approaches are evolving, two different ways of looking at how application infrastructure should be built.

Virtualization-centric solutions – in this approach, the cloud's main function is to provide provisioning and management services around server virtualization. Server images are the key component in this approach: these images are deployed on virtual servers, and the image is the atomic deployment and management unit for the cloud. Application provisioning is usually a set of images deployed on a number of servers. The virtualization infrastructure provides many of the enterprise-level features, such as high availability, data replication, storage allocation and management, and in some cases even scalability for the application. This approach evolves from the bottom up, starting by solving infrastructure inefficiency, server utilization issues, resource management, etc.

Service-centric approach – the cloud is a set of high-level services for customers to choose from when developing and deploying applications: data services, platform services, management services, etc. In this approach, different vendors offer a comprehensive framework for the application which includes everything needed for the application to run, usually wrapped in a friendly user interface that "shields" the customer from all the underlying infrastructure (want more storage – move this slider; want more scalability for the application – slide the cluster-size control higher). Great examples of this approach are Heroku, PHPFog, etc.

Read the full post here:

http://www.xeround.com/blog/2011/07/virtualization-vs-as-a-service-approach

 


Migrating from Yahoo Mail to Gmail – simple guide

Previously I was using Yahoo Mail as my main mail application, but recently I decided to move all my mail activity to Gmail, for a few reasons:

Google services integration – Gmail is well integrated with Google Docs and other tools I use frequently.

My Yahoo account was giving me a hard time when I configured it on my Samsung Android phone – I had to use the Yahoo Mail application and separately configure my company's Exchange account in Android's standard mail application, and having two separate applications for mail is a pain… This specific problem can be solved with a workaround, yet there are other reasons to continue with this migration (the issue does not exist on iPhones – they have one mail application that can present Yahoo and other mail accounts all in one app).

Yahoo Mail search is not working very well for me – I frequently cannot find the mail I'm looking for.

The latest Yahoo Mail application became extremely slow and heavy. This could be a long-distance issue: I'm not located in the US, and Yahoo's API calls, having to travel "around the globe", probably take their time.

Another major issue I had with the Yahoo account was lock-in. Yahoo charges $20 a year for POP-enabled accounts, and although that's not much, it looks like an unnecessary cost for a service no better than other free mail services – and the price might change as well. Without POP access, I am not able to retrieve all my mail history from this account.

As for the migration operation itself (no, it is not a short, easy process)…

Big question – how to do it right? After all, I'm still going to get mail on the Yahoo account from the many sources where I registered with my Yahoo mail address, so the migration is not a single point in time… it's a continuous process. I want to be able to use my Gmail account and get all my Yahoo mail in that same Gmail account.

The process is split into 2 steps:

  1. Configure the Yahoo mail account to enable POP3 (the standard email protocol)
  2. Configure the Gmail account to "POP" the Yahoo account and fetch all the email

Step 1 – enable POP for Yahoo mail – click on the little icon/image next to the name at the top left of the screen, then choose "Account".

You'll be asked to log in again, and then the Account Info page appears:

In the “Account Settings” section choose “Set language, site, and time zone”

In the new screen, select "Yahoo Asia" from the select box, then save and close.

Going back to the main Yahoo mail page, choose "Options" -> "Mail Options".

Click on “POP and Forwarding” and select the “Allow your Yahoo! Mail to be POPed” option.
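
If you want to verify that POP is really enabled before touching the Gmail side, here is a minimal sketch using Python's standard poplib module. The server name and the credentials are placeholders/assumptions – substitute your own:

```python
import poplib

# Assumed server name and placeholder credentials -- substitute your own.
HOST = "pop.mail.yahoo.com"
USER = "yourname@yahoo.com"
PASSWORD = "your-password"

conn = poplib.POP3_SSL(HOST, 995)   # POP over SSL, standard port 995
conn.user(USER)
conn.pass_(PASSWORD)

# stat() returns (message count, total mailbox size in bytes)
count, size = conn.stat()
print(f"POP is working: {count} messages, {size} bytes on the server")
conn.quit()
```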

Step 2 – Gmail configuration

Now, moving to the Gmail account, go to "Options" -> "Mail Settings".

Select the "Accounts and Import" tab; in the "Check mail using POP3:" section, click the "Add POP3 email account" button.

Enter your full Yahoo mail address (including @yahoo.com) and click the "Next" button.

Enter your username and password, and leave the POP server details as they are.

I chose "Leave a copy of retrieved messages on the server", because I want it to stay there as a backup. I also checked the labeling option.

Click “Add Account”.

Now, if all is correct, the migration process will begin, and messages from the Yahoo account will start to appear in the Gmail inbox.

Depending on the number of emails and the size of the Yahoo inbox, the whole transfer can take a very short time for a small amount of data, but could take days for larger Yahoo inboxes.

All Yahoo mail will show up in Gmail as unread mail, so I used this procedure to mark it as read:

http://www.woodwardweb.com/technology/marking_all_mai.html (Thanks Martin).
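
For reference, an alternative scripted way to mark everything as read (not necessarily the method in Martin's post) is through Gmail's IMAP interface. Here is a rough sketch, assuming IMAP access is enabled in Gmail's settings; the credentials are placeholders:

```python
import imaplib

# Placeholder credentials -- requires IMAP to be enabled in Gmail's settings.
conn = imaplib.IMAP4_SSL("imap.gmail.com")
conn.login("you@gmail.com", "your-password")
conn.select("INBOX")

# Find every unread message and add the \Seen flag to it.
status, data = conn.search(None, "UNSEEN")
for num in data[0].split():
    conn.store(num, "+FLAGS", "\\Seen")

conn.close()
conn.logout()
```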

Once all the old email is retrieved, every new mail sent to the Yahoo account will be transferred to the Gmail account as well.

Done!


As hyped as the cloud is today, it may become even more so, with the introduction of two (relatively) new technologies: 10Gb network connectivity and Solid State Drives, known as SSDs.

Hardware has improved immensely in the last few years. A key development in the field is more CPUs/cores per server, providing vast processing power to today's standard servers. This is amplified by the latest Nehalem architecture from Intel, which enables a much higher level of parallelism due to its re-architected cache memory and more cores per CPU. Standard I/O capabilities have also greatly improved, particularly with high-bandwidth and fairly low-cost solutions like SATA. In addition, hard drives have grown significantly in capacity and can now provide terabytes of storage on a single drive. Lastly, energy efficiency has improved considerably, with lower power consumption across the board.

These key improvements in hardware, which enable users to get even more from their infrastructure, have also found their way into the cloud – where they are used effectively to power this environment.

In its current state, cloud infrastructure has two main bottlenecks that significantly impair cloud services, especially distributed services:

1. Network Bandwidth:

A 1Gb network is commonly used in most interfaces today. Even if multiple network interface devices are used on a single server, their combined bandwidth is still far lower than desired. The relatively low bandwidth in and out of the server prevents optimal utilization of the server's capacity. After all, most applications are I/O-intensive, not computation-intensive. For example, a 4-core Intel CPU can easily reach a throughput of 40Gb/sec, yet this power cannot be demonstrated on top of a 1Gb network. (I highly recommend reading this paper from Intel, on a software-switch prototype demonstrating how fast CPUs are today.)

To unleash the power of the server, clouds must provide 10Gb connectivity as a standard. Cloud applications by nature require extensive network use. Once the STANDARD shifts from a 1Gb to a 10Gb network interface, the cloud as a whole will enjoy better connectivity and will be better equipped to deliver on its major promises.
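
To put that gap in perspective, here is a quick back-of-the-envelope calculation using the rounded figures above (the 40Gb/sec processing capability is taken from the Intel example and is an approximation, not a benchmark):

```python
# Back-of-the-envelope figures from the text above: a modern multi-core
# server can process roughly 40Gb/sec, but the NIC caps what it can
# actually ingest or emit over the network.
cpu_throughput_gbps = 40

for nic_gbps in (1, 10):
    # Fraction of the CPU's capability the network can actually feed.
    usable = nic_gbps / cpu_throughput_gbps
    print(f"{nic_gbps}Gb NIC -> at most {usable:.1%} of CPU throughput usable")

# Output:
# 1Gb NIC -> at most 2.5% of CPU throughput usable
# 10Gb NIC -> at most 25.0% of CPU throughput usable
```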

The following illustration shows how CPU and memory are utilized depending on network connectivity. To clarify, this assumes I/O is done through the network, such as when using Amazon’s EBS for example.

Illustration: poor utilization of CPU and memory with 1Gb network connectivity

The illustration shows a 1Gb network connection fully utilized for networking and I/O, while CPU and memory bandwidth remain under-utilized. The result is unbalanced server utilization, where the CPU and memory are under-utilized and the networking and I/O are over-utilized.

Illustration: maximum utilization of CPU and memory bandwidth with 10Gb network connectivity

A 10Gb network connection fully utilized for networking and I/O brings CPU and memory bandwidth to 100% utilization – a well-balanced server can easily utilize the full potential of all of its components.

2. Storage

Another major pain in the cloud is large-scale persistent storage and the way it deals with random access patterns – or, in short, hard drives…

Those old beasts are not suitable for serving one of the most common data use cases – a database.

Despite the improvements in hardware, hard drives for the most part still exhibit their inherent limitations of seek time and read/write serialization, resulting in poor support for random access. Hard drives have no parallelism at all, and they are highly limited by their physical, mechanical structure, which forces a delay of ~10ms on every move of the heads across the magnetic platters. This is less of a limitation when serving use cases like video streaming (which makes good use of a hard drive's high capacity and high throughput in sequential read patterns), but it proves inefficient for the heavy read/write operations that databases often require.

SSD is an evolving technology that resolves the random access issues while still providing large-scale storage capacity and long-term data persistence. An SSD is built on "electronics only", with no mechanical parts that cause delays. SSDs are a perfect fit for databases, despite the fact that today they are much more expensive than standard hard drives on a per-GB basis (the price ratio today is about 1 to 5). However, when considering the overall cost and value of this technology, and its effect on server utilization and performance, the price may not be as high as it seems. Another significant advantage is that no software changes are needed to benefit from SSD technology, as it uses the same interfaces as standard hard drives, namely the SATA interface.

The effective throughput of an SSD for random data access can reach 30-50MB/sec. Contrast that with a standard hard drive's effective throughput of about 100KB/sec (when reading 1K records from random locations on the drive). While SSDs keep improving, hard drives have made very little improvement in random access over the last 20 years.
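
The hard drive figure follows directly from the ~10ms seek delay mentioned earlier. The short sketch below redoes that arithmetic; all numbers are the approximations used in this post, not measurements:

```python
# Approximations used in this post, not benchmarks.
seek_time_s = 0.010      # ~10ms per head movement on a mechanical drive
record_size_kb = 1       # reading 1K records from random locations

reads_per_sec = 1 / seek_time_s                  # ~100 random reads/sec
hdd_kb_per_sec = reads_per_sec * record_size_kb  # ~100KB/sec
print(f"HDD effective random throughput: ~{hdd_kb_per_sec:.0f}KB/sec")

# An SSD has no mechanical seek at all; with sub-millisecond access times
# and internal parallelism, the same workload lands in the 30-50MB/sec
# range quoted above -- several hundred times faster.
```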

The following illustration shows how CPU and memory are utilized when standard hard drives are used, compared to SSD.

Illustration: CPU and memory utilization with regular hard drives

Ten hard drives at full throttle (random access) barely scratch the performance of a server.

Illustration: solid state drives better utilize the CPU and memory of the server

Ten SSDs bring server utilization to a very reasonable level, leaving room for additional tasks, such as web apps, to run effectively on the same server.

With these two great improvements implemented, the cloud can be truly exploited in an effective and efficient manner. Their impact on critical components in the software stack – such as distributed databases – is immense.

Bottom line – 10Gb networks and SSDs are an absolute necessity in an advanced cloud environment, leading to improved resource utilization and further adoption of the cloud.

10Gb networks and SSDs are hopefully coming soon to a cloud service provider near you!


Now MySQL with an extra cool feature!

Xeround Database now has an auto-scaling feature, which means even less babysitting for MySQL databases… Let the Xeround database management system scale your databases as needed, based on load, size, and number of concurrent connections.
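
Conceptually, a scaling policy driven by those three signals could look like the sketch below. This is purely illustrative – the thresholds and names are hypothetical, and it is not Xeround's actual implementation:

```python
# Purely illustrative scaling policy -- NOT Xeround's implementation.
# All thresholds and metric names here are hypothetical.

def scale_decision(load_pct, size_pct, connections, max_connections):
    """Return +1 to scale out, -1 to scale in, 0 to leave as is."""
    if load_pct > 80 or size_pct > 80 or connections > 0.8 * max_connections:
        return +1   # any signal near its ceiling -> add capacity
    if load_pct < 20 and size_pct < 40 and connections < 0.2 * max_connections:
        return -1   # everything idle -> shrink and save cost
    return 0

# Example: heavy load but a small dataset -> scale out.
print(scale_decision(load_pct=85, size_pct=30, connections=120,
                     max_connections=500))   # prints 1
```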

http://blog.xeround.com/2011/04/industry-first-auto-scaling-database-in-the-cloud


This blog post takes you through the database essentials: how to ensure your database is highly available and performs optimally, without getting buried in development, manual configuration, and IT hassles.

http://blog.xeround.com/2011/04/approaches-database-performance-and-throughput-in-the-cloud


My latest blog post provides a helpful guide for migrating your application to the cloud, including how to choose your CMS, your cloud hosting infrastructure, and your cloud data service.

http://blog.xeround.com/2011/03/enterprises-in-public-cloud-environment