Sun SPARC

Basic Puppet Installation on Solaris 11.2

Reviewing and installing Puppet on Solaris 11.2 is useful in practice: it shows how Puppet works on Solaris to manage multiple systems and distribute configuration from a central system.

In our example we install Puppet in the global zone and two non-global zones, then test distribution of /etc/hosts between the Puppet master and the Puppet agents. This tutorial assumes you have a global zone ‘master’ and two non-global zones ‘agent1’ and ‘agent2’.

Preparation

I have already prepared the /etc/hosts file on our Puppet server ‘master’ and the clients ‘agent1’ and ‘agent2’. The /etc/hosts of all zones (global and non-global alike) now looks like this:

root@agent1:~# cat /etc/hosts
#
# Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# Internet host table
#
::1             localhost
127.0.0.1       localhost loghost
192.168.1.100    master.puppet.local.net master kusanagi
192.168.1.101   agent1.puppet.local.net agent1
192.168.1.102   agent2.puppet.local.net agent2

And the hostname is set accordingly on each system:

root@master:~# hostname master.puppet.local.net
root@agent1:~# hostname agent1.puppet.local.net
root@agent2:~# hostname agent2.puppet.local.net
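Note that the hostname command only affects the running system. On Solaris 11 the persistent node name is held by the system/identity:node SMF service; a sketch for the master, assuming the standard identity service (the same pattern applies in each agent zone):

```shell
# Persist the node name across reboots via SMF (run the equivalent in each zone)
svccfg -s svc:/system/identity:node setprop config/nodename = astring: "master.puppet.local.net"
svccfg -s svc:/system/identity:node refresh
svcadm restart svc:/system/identity:node
```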

Installing and Configuring Puppet

Starting the Puppet installation is quite easy: just install the regular Solaris package from the repository with ‘pkg install puppet’. This should be done on all systems in our setup. Then prepare the Puppet server and enable it through Solaris SMF:

root@master:~# pkg install puppet
root@master:/etc/puppet/ssl# svccfg -s puppet:master setprop config/server = master.puppet.local.net
root@master:/etc/puppet/ssl# svccfg -s puppet:master refresh
root@master:/etc/puppet/ssl# svcadm enable puppet:master
root@master:/etc/puppet/manifests# svcs puppet
STATE          STIME    FMRI
disabled       Mai_19   svc:/application/puppet:agent
online         Mai_19   svc:/application/puppet:master

Prepare the agents and test connectivity between agent and master:

root@agent1:# puppet agent --test --server master.puppet.local.net
Info: Creating a new SSL key for agent1.puppet.local.net
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for agent1.puppet.local.net
Info: Certificate Request fingerprint (SHA256): 14:20:1E:C8:D8:78:1D:DF:9C:92:75:F2:72:C6:61:61:AC:56:82:06:FC:A4:6D:5E:DA:5F:7E:12:80:5B:90:A9
Info: Caching certificate for ca
Exiting; no certificate found and waitforcert is disabled
root@agent1:~#

The connection between the agent and the master is protected by SSL. On the first execution the agent creates a key and a certificate signing request. We have to repeat this command on the other client:

root@agent2:~# puppet agent --test --server master.puppet.local.net
Info: Creating a new SSL key for agent2.puppet.local.net
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for agent2.puppet.local.net
Info: Certificate Request fingerprint (SHA256): 76:A7:23:A7:4D:41:66:DD:71:B0:4E:AA:62:EC:0B:DB:61:59:BE:56:43:15:E7:BA:C3:CB:AF:D3:98:D3:30:18
Info: Caching certificate for ca
Exiting; no certificate found and waitforcert is disabled
root@agent2:~#
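Instead of exiting when no signed certificate is found, the agent can poll the master until the request has been signed, using the --waitforcert option (value in seconds):

```shell
# Retry every 60 seconds instead of exiting when no certificate is found yet
puppet agent --test --server master.puppet.local.net --waitforcert 60
```

This saves the second round-trip to each agent after the master has signed the requests.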

From the Puppet server ‘master’ we can list the outstanding certificate signing requests from the agents:

root@master:/etc/puppet/ssl# puppet cert list
  "agent1.puppet.local.net" (SHA256) 14:20:1E:C8:D8:78:1D:DF:9C:92:75:F2:72:C6:61:61:AC:56:82:06:FC:A4:6D:5E:DA:5F:7E:12:80:5B:90:A9
  "agent2.puppet.local.net" (SHA256) 76:A7:23:A7:4D:41:66:DD:71:B0:4E:AA:62:EC:0B:DB:61:59:BE:56:43:15:E7:BA:C3:CB:AF:D3:98:D3:30:18

Now we have two requests to sign. Let’s sign them:

root@master:/etc/puppet/ssl# puppet cert sign agent1.puppet.local.net
Notice: Signed certificate request for agent1.puppet.local.net
Notice: Removing file Puppet::SSL::CertificateRequest agent1.puppet.local.net at '/etc/puppet/ssl/ca/requests/
root@master:/etc/puppet/ssl# puppet cert sign agent2.puppet.local.net
Notice: Signed certificate request for agent2.puppet.local.net
Notice: Removing file Puppet::SSL::CertificateRequest agent2.puppet.local.net at '/etc/puppet/ssl/ca/requests/agent2.puppet.local.net.pem'

We can now run the agent on each zone again and observe the different output:

Agent1 

root@agent1:~# puppet agent --test --server=master.puppet.local.net
Info: Caching certificate for agent1.puppet.local.net
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for agent1.puppet.local.net
Info: Retrieving plugin
Notice: /File[/var/lib/puppet/lib/puppet]/ensure: created
[...]
Info: Caching catalog for agent1.puppet.local.net
Info: Applying configuration version '1400520716'
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 0.09 seconds
root@agent1:~# svccfg -s puppet:agent setprop config/server = master.puppet.local.net
root@agent1:~#  svccfg -s puppet:agent refresh
root@agent1:~# svcadm enable puppet:agent

Agent2

root@agent2:~# puppet agent --test --server master.puppet.local.net
Info: Caching certificate for agent2.puppet.local.net
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for agent2.puppet.local.net
Info: Retrieving plugin
Notice: /File[/var/lib/puppet/lib/puppet]/ensure: created
[...]
Info: Caching catalog for agent2.puppet.local.net
Info: Applying configuration version '1400520716'
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 0.09 seconds
root@agent2:~# svccfg -s puppet:agent setprop config/server = master.puppet.local.net
root@agent2:~# svccfg -s puppet:agent refresh
root@agent2:~# svcadm enable puppet:agent

Connectivity between the agents and the master is now tested and done. Next we will do something easy but important in system administration: manage /etc/hosts. We create a Puppet module called etchosts which delivers an /etc/hosts file. The file lives at /etc/puppet/modules/etchosts/files/hosts and can be accessed through Puppet under the URL puppet:///modules/etchosts/hosts.

I will go through each step on the Puppet master and the Puppet agents to show how to configure it:

root@master:~# mkdir /etc/puppet/modules/etchosts
root@master:~# mkdir /etc/puppet/modules/etchosts/files
root@master:~# mkdir /etc/puppet/modules/etchosts/manifests
root@master:~# cp /etc/hosts /etc/puppet/modules/etchosts/files/hosts

Now we give the module some life with a manifest that manages /etc/hosts:

root@master:~# cat << EOT > /etc/puppet/modules/etchosts/manifests/init.pp
> class etchosts {
>         file { "/etc/hosts":
>                 source => 'puppet:///modules/etchosts/hosts',
>         }
> }
> EOT
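The file resource can manage more than just the content. A sketch that also pins ownership and permissions; the owner, group, and mode values here are illustrative assumptions, not taken from the original setup:

```puppet
class etchosts {
        file { "/etc/hosts":
                source => 'puppet:///modules/etchosts/hosts',
                owner  => 'root',   # assumed: root-owned, as usual for /etc files
                group  => 'sys',    # assumed: typical group for Solaris /etc files
                mode   => '0444',   # assumed: readable by all, writable by none
        }
}
```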

Afterwards we create a file that just imports the node definitions file:

root@master:~# cat << EOT > /etc/puppet/manifests/site.pp
> import 'nodes.pp'
> EOT

At last we define the behaviour for the default node:

root@master:~# cat << EOT > /etc/puppet/manifests/nodes.pp
> node default {
>         include etchosts
> }
> EOT

Now let’s just add a line to the /etc/puppet/modules/etchosts/files/hosts file:

root@master:~# echo "# Add Line to puppet" >> /etc/puppet/modules/etchosts/files/hosts

Okay, now log into one of the zones with an agent and check the current /etc/hosts:

hosam@agent2:~$ cat /etc/hosts
[...]
192.168.1.102   agent2.puppet.local.net agent2
hosam@agent2:~$

At the moment the additional line isn’t in the file yet. You can now wait up to 1800 seconds (by default the agent checks every 1800 seconds whether there is something to do) or you can force the check. Let’s force it.

root@agent2:~# puppet agent --test
Info: Retrieving plugin
Info: Caching catalog for agent2.puppet.local.net
Info: Applying configuration version '1400529373'
Notice: /Stage[main]/Etchosts/File[/etc/hosts]/content:
--- /etc/hosts  Tue May 20 08:10:32 2014
+++ /tmp/puppet-file20140520-8490-tfgwja        Tue May 20 08:10:02 2014
@@ -11,2 +11,3 @@
 192.168.1.102  agent2.puppet.local.net agent2
+# Add Line to puppet

Info: /Stage[main]/Etchosts/File[/etc/hosts]: Filebucketed /etc/hosts to puppet with sum 38f6c964aab77edb2ff938094f13e2d0
Notice: /Stage[main]/Etchosts/File[/etc/hosts]/content: content changed '{md5}38f6c964aab77edb2ff938094f13e2d0' to '{md5}49b07e8c62ed409a01216bf9a35ae7ae'
Notice: Finished catalog run in 0.60 seconds
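Note the ‘Filebucketed’ line above: before overwriting /etc/hosts the agent saved the old content under its MD5 sum. If the previous version is ever needed again, it can be fetched back with the filebucket subcommand; a hedged sketch (option names vary between Puppet releases, so check ‘puppet help filebucket’):

```shell
# Restore the previous /etc/hosts by the MD5 sum reported in the run output
puppet filebucket restore /etc/hosts 38f6c964aab77edb2ff938094f13e2d0
```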

Now let’s check the file again … et voilà … you will find the line you added to /etc/puppet/modules/etchosts/files/hosts in the /etc/hosts of your zone.

root@agent2:~# cat /etc/hosts
[...]
192.168.1.101   agent1.puppet.local.net agent1
192.168.1.102   agent2.puppet.local.net agent2
# Add Line to puppet
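By the way, the 1800-second default mentioned above comes from the agent’s runinterval setting and can be changed in puppet.conf; a sketch, with the 600-second value purely as an example:

```ini
# /etc/puppet/puppet.conf on the agent
[agent]
    # check in with the master every 10 minutes instead of the default 30
    runinterval = 600
```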

Within the time it takes to drink a cup of coffee, every update is synced from the master to all agents. Done! 😀

sys-unconfig: Reconfiguring network settings on a Solaris box

This command unconfigures a Solaris box back to its original state, returning the network settings to their unconfigured defaults. You will be prompted for the IP address, default route, etc., just as if you were re-installing the operating system.

Command:

#sys-unconfig

Output:

/# sys-unconfig

                        WARNING

This program will unconfigure your system. It will cause it
to revert to a "blank" system - it will not have a name or know
about other systems or networks.

This program will also halt the system.

Do you want to continue (y/n) ? n
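Worth knowing: on Solaris 11 itself, sys-unconfig is regarded as legacy, and the sysconfig(1M) tool provides the equivalent function; a sketch, assuming the unconfigure subcommand of your release:

```shell
# Solaris 11 replacement for sys-unconfig: unconfigure the whole system and halt
sysconfig unconfigure -g system
```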

Oracle releases full virtualized Oracle Solaris 11

9 November, 2011
By Mark Cox

Oracle has announced availability of Oracle Solaris 11, the first fully virtualized OS, which it is terming the first Cloud OS. Oracle Solaris 11 is designed to meet the security, performance and scalability requirements of cloud-based deployments allowing customers to run their most demanding enterprise applications in private, hybrid, or public clouds.

Oracle Solaris 11 provides comprehensive, built-in virtualization capabilities for OS, network and storage resources. In addition to its built-in virtualization capabilities, Oracle Solaris 11 is engineered for Oracle VM server virtualization on both x86 and SPARC based systems, providing deployment flexibility and secure live migration.

“Oracle Solaris 11 is the most significant operating system release of the past decade,” said John Fowler, executive vice president, Systems, Oracle. “With built-in server, storage and now, network virtualization, Oracle Solaris 11 delivers the industry’s first cloud OS. Customers can simplify their enterprise deployments, drive up utilization of their data center assets, and run Oracle and other enterprise applications faster all within a secure, scalable cloud or traditional enterprise environment.”

Oracle Solaris Zones virtualization scales up to hundreds of zones per physical node at a 15x lower overhead than VMware and without artificial limits on memory, network, CPU and storage resources. New, integrated network virtualization allows customers to create high-performance, low-cost data center topologies within a single OS instance for ultimate flexibility, bandwidth control and observability.

Oracle Solaris 11 runs business-critical enterprise applications in virtualized massive horizontal scale as well as vertically integrated environments on a wide range of SPARC and x86 servers. Customers can run any of the more than 11,000 applications supported today on Oracle Solaris 11, with guaranteed binary compatibility through the Oracle Solaris Binary Application Guarantee Program. Customers can also preserve their existing investments by using P2V and V2V tools to move their existing Oracle Solaris 10 environments to an Oracle Solaris 10 Zone, while gaining access to the latest Oracle Solaris 11 enhancements.

“With Oracle Solaris, we have a unique opportunity to support the industry’s largest and leading business software portfolio on the industry’s best UNIX for both SPARC and x86 servers,” said Thomas Kurian, executive vice president of Oracle Product Development. “Working together, our development teams engineer, test and support advanced solutions to our customers’ toughest problems. We are pleased to announce that our key software products–Oracle Database 11g, Oracle Fusion Middleware, Oracle Enterprise Manager, and Oracle Applications–are available and optimized for Oracle Solaris 11 on SPARC and x86 systems.”

Oracle Enterprise Manager Ops Center, now included in systems support, provides converged systems management, enabling enterprise wide, centralized control over hardware, OS and virtualization resources.

Oracle Solaris 11, Oracle Enterprise Manager Ops Center and Oracle VM software are included as part of systems support with all of Oracle’s Sun servers, providing customers with built-in cloud capabilities.

Partners in Oracle Partner Network (OPN) will find new Oracle Solaris 11 tools and resources in the Oracle Solaris Knowledge Zone including the Oracle Solaris Remote Lab and the Oracle Solaris Development Initiative. New Oracle Solaris 11 Training is also available to help customers and partners take advantage of the best-in-class features of Oracle Solaris 11 and upgrade from Oracle Solaris 10 or earlier versions.

 

Oracle Solaris 11 OS

What’s new on the Solaris 11 Desktop?


By Calum on Nov 09, 2011

Much has been written about the enterprise and cloud features of Oracle Solaris 11, which launched today, but what’s new for those of us who just like to have the robustness and security of Solaris on our desktop machines? Here are a few of the Solaris 11 desktop highlights:

Creating an IPMP Group on Solaris 11


Solaris 11 brings some fantastic new networking features in the form of Project Crossbow. These include virtual network interface cards (VNICs), virtual switching (etherstubs), flows for controlling bandwidth and network utilisation, and more detailed analytics and observability functions, just to name a few. Along with these advanced new features, many of our old favourites have been enhanced to make life easier for the already busy systems administrator.

Gone are the days of messing with /etc/hostname.* files to create your IP multipath groups. Solaris 11 gives you a new command – ipadm – to help you cut through all the tedium of setting up IPMP:

root@dinkum:/# ipadm create-ipmp ipmp0
root@dinkum:/# ipadm add-ipmp -i e1000g1 -i e1000g2 ipmp0

It really is as simple as that.
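To actually carry traffic, the group still needs an address, and ipmpstat shows the health of the group; a minimal sketch (the 192.168.1.50/24 address is an assumption for this example network):

```shell
# Put a static test address on the IPMP group interface (address is an example)
ipadm create-addr -T static -a 192.168.1.50/24 ipmp0/v4

# Show group state and the state of the underlying interfaces
ipmpstat -g
ipmpstat -i
```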