Tag Archives: Ubuntu

Mark Shuttleworth dishes on where Canonical and Ubuntu Linux are going next
June 8, 2018 6:00 am|Comments (0)

Mark Shuttleworth looked good at OpenStack Summit in Vancouver. Not only were his company Canonical and operating system Ubuntu Linux doing well, but thanks to his microfasting diet, he’s lost 40 pounds. Energized and feeling good, he’s looking forward to taking Canonical to its initial public offering (IPO) in 2019 and making the company more powerful than ever.

It’s taken him longer than expected to IPO Canonical. Shuttleworth explained, “We will do the right thing at the right time. That’s not this year, though. There’s a process that you have to go through and that takes time. We know what we need to hit in terms of revenue and growth and we’re on track.”

In the meantime, besides his own wealth — according to the BBC, his personal wealth jumped by £340 million last year — he’s turned to private equity to help fuel Canonical’s growth.

And where is that growth coming from? Well, it's not the desktop. As fond as users, and Shuttleworth himself, are of the Linux desktop, Canonical's real money comes from the cloud.

Ubuntu remains the dominant cloud operating system. According to the May 8, 2018 Cloud Market statistics for the Amazon Web Services (AWS) cloud, Ubuntu dominates with 209,000 instances, well ahead of its competitors: Amazon Linux AMI with 88,500, Red Hat Enterprise Linux (RHEL) and CentOS with 31,400, and Windows Server with 29,200. As another data point, executives at the OpenStack cloud company Rackspace told me that although their company had started with RHEL, today the split is 60/40 in Ubuntu's favor.

OpenStack has been very, very good for Canonical, which is more than you can say for many companies that tried to make it as OpenStack providers or distributors. “With OpenStack it’s important to deliver on the underlying promise of more cost-effective infrastructure,” Shuttleworth said. Sure, “You can love technology and you can have new projects and it can all be kumbaya and open source, but what really matters is computers, virtual machines, virtual disks, virtual networks. So we ruthlessly focus on delivering that and then also solving all the problems around that.”

So it is, Shuttleworth claims, that “Canonical can deliver an OpenStack platform to an enterprise in two weeks with everything in place.”

What’s driving Canonical growth on both the public and OpenStack-based cloud is “machine learning and container operations. The economics of automating the data center brings people to Ubuntu.”

That said, “The Internet of Things (IoT) is still an area of investment for us. We have the right set of primitives [Ubuntu Core, Ubuntu for IoT and Snap containerized applications] to bring IoT all over the planet.” But, it’s “not profitable yet”.

Shuttleworth thinks Ubuntu will end up leading IoT, as it has the cloud, “because a developer can transfer their programs from a workstation to the cloud to a gateway to the IoT. I want to make sure we build the right set of technologies so you can operate a billion things with Ubuntu on it.” To make this happen, Shuttleworth said Canonical currently has just short of 600 full-time developers.

As for the desktop, Shuttleworth finds it a “fascinating study of human nature that Unity [Ubuntu’s former desktop] became a complete exercise in torches and pitchforks. I’m now convinced a lot of the people who demanded its demise never used it.” That’s because, while “I think GNOME is a nicely done desktop,” many Ubuntu users are now objecting to GNOME. Shuttleworth also had kind words about the KDE Neon, MATE, and LXDE desktops. Still, “I do miss Unity, but I use GNOME.”

Shuttleworth would like to see the open-source community become “safer to put new ideas out into it.” Too often, “it’s obnoxious to someone else’s labor of love.”

That said, in business competition, Shuttleworth said, after people criticized him for calling out Red Hat and VMware by name in his OpenStack keynote speech, “I don’t think it was offsides to talk about money and competition. OpenStack has to be in the room where public clouds are discussed and Ubuntu has to be in the conversation when it comes to cloud operating systems. No one has questioned the facts.”

In a way, though, having given up on innovating on the desktop and on the smartphone market has been a blessing. “I can work with more focus on cloud and the edge and IoT. We’re moving faster. Our security and performance story can be tighter because we can put more time on both of them.”

One thing that Shuttleworth believes Canonical does better than his competition is delivering the best from upstream to its customers. “Take OpenStack, we didn’t invent a bunch of pieces. We take care of stuff people need by trusting the upstream community. People find this refreshing.”

Canonical also succeeds, he thinks, because they eat their own dog food. “We learn stuff by operating it ourselves and not just developing it. We experience what it’s like to operate many OpenStack and Kubernetes stacks. We then offer these complex solutions as a managed service, and that reduces the cost for users.”

The result is a company that Shuttleworth is sure will lead the way in the cloud and container-driven world of IT.

Posted in: Cloud Computing
Canonical Announces the Availability of Ubuntu Core for Samsung ARTIK 5 and 10
May 7, 2016 11:05 am|Comments (0)

Thibaut Rouffineau, an IoT & Ubuntu Core evangelist, has announced the availability of Canonical’s Ubuntu Core operating system for Samsung ARTIK 5 and 10 IoT (Internet of Things) platforms.
Those of you who have been waiting to get your hands on the Ubuntu Core developer images for the Samsung ARTIK 5 and Samsung ARTIK 10 boards should know that they are available as a free download from the https://developer.ubuntu.com/en/snappy/start/samsung-artik-iot-modules/ website.
These Ubuntu Core images give developers access to a number of the two Samsung ARTIK IoT boards' technologies, including but not limited to Wi-Fi and Bluetooth, and they can also be used as a starting platform for building their next Internet of Things applications and devices.

Source: http://news.softpedia.com/news/canonical-announces-the-availability-of-ubuntu-core-for-samsung-artik-5-and-10-503744.shtml
Submitted by: Arnfried Walbrecht


Posted in: Web Hosting News
How To Create a High Availability HAProxy Setup with Corosync, Pacemaker, and Floating IPs on Ubuntu 14.04
March 7, 2016 7:10 pm|Comments (0)

A high-availability architecture is one of the key requirements for any enterprise deployment. In this tutorial we will build a two-node high-availability cluster using the Corosync cluster engine and the Pacemaker resource manager on Ubuntu 14.04, together with a Floating IP, to create a highly available (HA) server infrastructure in our cloud environment.

The Corosync Cluster Engine is an open source project derived from the OpenAIS project and licensed under the new BSD License. It allows any number of servers to be part of a cluster using any number of fault-tolerant configurations (active/passive, active/active, N+1, etc.). The Corosync project's aim is to develop, release, and support a community-defined, open source cluster executive that provides messaging between servers within the same cluster, for use by multiple open source and commercial cluster projects and products.

Pacemaker is an open source high-availability resource manager used on computer clusters; it manages the resources and applications on the nodes within the cluster. It implements several APIs for controlling resources, but its preferred API for this purpose is the Open Cluster Framework resource agent API.

How it works

We are going to set up a high-availability cluster consisting of two Ubuntu 14.04 servers combined with a Floating IP in an active/passive configuration. Users will access the web service through the primary node unless Pacemaker detects a failure. When the primary node fails, the secondary node becomes active and a script reassigns the Floating IP to it so it can continue serving the incoming traffic.

Prerequisites

To complete this article we need two nodes with the Ubuntu 14.04 operating system installed, each set up with its own unique FQDN. We also need a Floating IP address that can be assigned to either node and will be used for the failover.
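
For example, one simple way to make sure the two nodes can resolve each other by name is to add entries to '/etc/hosts' on both servers. This is only a sketch; the private IP addresses below are placeholders for your own values.

# vim /etc/hosts

10.0.0.1    Ubuntu-14-P
10.0.0.2    Ubuntu-14-S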

We will follow the steps below, in sequence, to set up the fully functional high-availability cluster on Ubuntu 14.04.

1 – Create two Ubuntu nodes
2 – Create a Floating IP and assign it to one node
3 – Install Corosync and Pacemaker
4 – Configure Corosync
5 – Configure the Corosync cluster
6 – Configure and start Pacemaker
7 – Configure the Nginx cluster resource

Step 1: Creating 2 Ubuntu Nodes

The first step is to create two separate Ubuntu nodes and configure the FQDNs that represent them. Here we will use ‘Ubuntu-14-P’ for our primary node and ‘Ubuntu-14-S’ for the secondary node. Make sure the private networking option is enabled on each node. Now log in to both servers with your sudo user and run the following commands as root on both of them to update the servers and install the Nginx web server; then configure the default web page with some test content that identifies the current node.

# apt-get -y update
# apt-get -y install nginx

Now replace the contents of ‘index.html’ with your primary and secondary hostnames and IP addresses, which will be useful for testing which node the Floating IP is pointing to at any given moment. You can do so by adding your host information using your editor.

# vim /usr/share/nginx/html/index.html
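
As a simple illustration (the hostname and IP address below are placeholders for your own values), the file on the primary node might contain something like:

<h1>Ubuntu-14-P</h1>
<p>Primary node - primary_servers_ip</p>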

Save and close the file, then update the secondary node's copy with its own hostname and IP address.

Step 2: Configure Floating IP

A Floating IP is an IP address that can be instantly moved from one node to another node in the same datacenter. As part of a highly available infrastructure, it lets us immediately point an IP address at a redundant server. You can create your new Floating IP from the cloud console and assign it to the primary node.

Once you have assigned the Floating IP to your primary server, open the Floating IP address in your web browser. You will see the same content whether you browse to the server's original IP address or to the Floating IP.

http://your_floating_ip

You will see your test page after opening the server's IP address in the web browser.
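
If you prefer the command line, you can run the same check with curl from any machine; 'your_floating_ip' is a placeholder for the Floating IP you created.

# curl http://your_floating_ip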

Setup Time Synchronization

Time synchronization is important when you are setting up clusters, because each node has to communicate with the others. Let's run the following command on both server nodes.

# dpkg-reconfigure tzdata

Select the same time zone on both servers. After the time zone setup, update your system once again and install the ‘ntp’ package with the following commands.

# apt-get update
# apt-get -y install ntp

Step 3: Installing Corosync and Pacemaker

Now we are going to install the Corosync and Pacemaker packages on both servers using the following command.

# apt-get install pacemaker

Corosync will be installed as a dependency of Pacemaker. Press the ‘y’ key to continue the installation process.

Step 4: Corosync Configuration

After the required Corosync and Pacemaker packages have been installed on both nodes, we are going to configure Corosync so that both servers can communicate as a cluster. In order to allow nodes to join the cluster, Corosync requires that each node possesses an identical cluster authorization key.

Let's run the following command on the primary node to install the ‘haveged’ package, so that we can easily increase the amount of entropy on our server, which is required by the ‘corosync-keygen’ script.

primary_node# apt-get install haveged

Then run the below command to generate a 128-byte cluster authorization key as shown.

primary_node# corosync-keygen

Now copy the generated ‘authkey’ across to the secondary node using the ‘scp’ command as shown below.

primary_node# scp /etc/corosync/authkey user@secondary_node:/tmp

Then, on the secondary node, move the ‘authkey’ file to the proper location and set the right ownership and permissions using the following commands.

secondary_node# mv /tmp/authkey /etc/corosync/
secondary_node# chown root: /etc/corosync/authkey
secondary_node# chmod 400 /etc/corosync/authkey

Step 5: Configuration of Corosync Cluster

To get our desired cluster up and running, we must open ‘corosync.conf’ and set the following parameters on both servers, so that the configuration is identical on both of them.

# vim /etc/corosync/corosync.conf

totem {
        version: 2

        # How long before declaring a token lost (ms)
        token: 3000

        # How many token retransmits before forming a new configuration
        token_retransmits_before_loss_const: 10

        # How long to wait for join messages in the membership protocol (ms)
        join: 60

        # How long to wait for consensus to be achieved before starting a new round of membership configuration (ms)
        consensus: 3600

        # Turn off the virtual synchrony filter
        vsftype: none

        # Number of messages that may be sent by one processor on receipt of the token
        max_messages: 20

        # Limit generated nodeids to 31-bits (positive signed integers)
        clear_node_high_bit: yes

        # Disable encryption
        secauth: off

        # How many threads to use for encryption/decryption
        threads: 0

        # Optionally assign a fixed node id (integer)
        # nodeid: 1234

        # This specifies the mode of redundant ring, which may be none, active, or passive.
        rrp_mode: none

        interface {
                # The following values need to be set based on your environment
                ringnumber: 0
                bindnetaddr: primary_servers_ip
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}

amf {
        mode: disabled
}

quorum {
        # Quorum for the Pacemaker Cluster Resource Manager
        provider: corosync_votequorum
        expected_votes: 1
}

aisexec {
        user:   root
        group:  root
}

nodelist {
  node {
    ring0_addr: primary_servers_ip
    name: ubuntu-14-p
    nodeid: 1
  }
  node {
    ring0_addr: secondary_servers_ip
    name: ubuntu-14-s
    nodeid: 2
  }
}

Save the file. Next we need to configure Corosync to allow the Pacemaker service on both servers by creating a new ‘pcmk’ file with the following content; it will be included in the Corosync configuration and allows Pacemaker to use Corosync to communicate with our servers.

# vi /etc/corosync/service.d/pcmk

service {
  name: pacemaker
  ver: 1
}

After saving that file, open the Corosync defaults file to enable the service at boot, and then run the command to start the service on both servers.

# vi /etc/default/corosync

# start corosync at boot [yes|no]
START=yes

# service corosync start

Step 6: Pacemaker Configuration and Start-up

Now that we are done with Corosync, we move on to configuring the Pacemaker service and setting its startup priority.

Run the following command to enable Pacemaker startup with priority 20. Pacemaker must start after Corosync, whose default priority is 19, so here we set Pacemaker's priority to 20 and then start its service as shown below.

# update-rc.d pacemaker defaults 20 01
# service pacemaker start

Now check the status of Pacemaker using the CRM utility. Simply run the following command and it will show you the online state of both your nodes.

# crm status

Step 7: Adding NGINX Resource

We have successfully configured both nodes. Now we are going to add the Nginx resource, since Pacemaker comes with a default Nginx resource agent, and make the Nginx service highly available using the Floating IP that we have configured. Let's run the following command to create a new primitive cluster resource called "Nginx".

# crm configure primitive Nginx ocf:heartbeat:nginx \
    params httpd="/usr/sbin/nginx" \
    op start timeout="40s" interval="0" \
    op monitor timeout="30s" interval="10s" on-fail="restart" \
    op stop timeout="60s" interval="0"

This will monitor Nginx every 10 seconds and restart it if it becomes unavailable. Then create a clone resource, which specifies that the existing primitive resource should be started on multiple nodes, by running the command shown below.

# crm configure clone Nginx-clone Nginx
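
The colocation constraint in the next step refers to a cluster resource named "FloatIP", which this tutorial assumes has already been defined. As a rough sketch only: on a plain network you could define such a resource with the standard 'ocf:heartbeat:IPaddr2' agent, as shown below, but for a cloud provider's Floating IP you would normally use a provider-specific resource agent or a script that reassigns the IP through the provider's API.

# crm configure primitive FloatIP ocf:heartbeat:IPaddr2 \
    params ip="your_floating_ip" cidr_netmask="32" \
    op monitor interval="10s"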

Next, create a colocation constraint called "FloatIP-Nginx" using the following command.

# crm configure colocation FloatIP-Nginx inf: FloatIP Nginx-clone

This will create the colocation constraint; both of your servers will run Nginx, while only one of them holds the Floating IP. Whenever the Nginx service stops on that server, the Floating IP will be migrated to the secondary node.

Let's run the following commands on the secondary node to check the crm status.

# crm configure show
# crm status
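
To see the failover in action, you can put the primary node into standby (a sketch, assuming the node names used in the Corosync configuration above), watch the resources move to the secondary node with 'crm status', and then bring the node back online.

# crm node standby ubuntu-14-p
# crm status
# crm node online ubuntu-14-p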

Conclusion

We have successfully set up a basic high-availability configuration using Corosync, Pacemaker, and a Floating IP address, with two Ubuntu nodes acting as primary and secondary. Using the same scenario you can configure HAProxy load balancers to split traffic between two backend application servers. If the primary load balancer goes down, the Floating IP will be moved to the second load balancer automatically, allowing service to resume. You can use the same configuration method to set up high availability for any other application.


Posted in: Web Hosting News
Ubuntu Make Now Lets Users Install the Unity 3D Editor in Ubuntu Linux
November 17, 2015 10:45 pm|Comments (0)

Ubuntu Make Now Lets Users Install the Unity 3D Editor in Ubuntu Linux
Didier Roche, the creator of the Ubuntu Make command-line utility that lets users of the Ubuntu Linux operating system install various useful third-party projects, has announced the release of a new maintenance version. Ubuntu Make 15.09 is now …
Read more on Softpedia News

ownCloud Announces Ubuntu-Based Appliance with ownCloud Proxy
Being based on the long-term supported Ubuntu 14.04 (Trusty Tahr) operating system, the ownCloud Appliance comes fully pre-configured and includes the ownCloud Proxy app, which was introduced during the ownCloud Contributor Conference event that …
Read more on Softpedia News


Posted in: Web Hosting News