OpenSolaris

All about the OpenSolaris world and everything related to it

Basic Puppet Installation on Solaris 11.2

Reviewing and installing Puppet on Solaris 11.2 is useful in practice to understand how Puppet works on Solaris and how it can manage multiple systems and distribute configuration from a central system.

In our example we install Puppet in the global zone and two non-global zones, then test distributing /etc/hosts from the Puppet master to the Puppet agents. This tutorial assumes you have a global zone ‘master’ and two non-global zones ‘agent1’ and ‘agent2’.

Preparation

I have already prepared the /etc/hosts file on our Puppet server ‘master’ and the clients ‘agent1’ and ‘agent2’. The /etc/hosts of all zones (global and non-global alike) now looks like this:

root@agent1:~# cat /etc/hosts
#
# Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# Internet host table
#
::1             localhost
127.0.0.1       localhost loghost
192.168.1.100    master.puppet.local.net master kusanagi
192.168.1.101   agent1.puppet.local.net agent1
192.168.1.102   agent2.puppet.local.net agent2

And the hostname of each system is set accordingly:

root@master:~# hostname master.puppet.local.net
root@agent1:~# hostname agent1.puppet.local.net
root@agent2:~# hostname agent2.puppet.local.net
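
Note that running hostname by hand only changes the node name of the running system. On Solaris 11 the name can be made persistent through the SMF identity service instead (a minimal sketch, shown for the master; verify the property path on your release):

root@master:~# svccfg -s svc:/system/identity:node setprop config/nodename = master.puppet.local.net
root@master:~# svcadm refresh svc:/system/identity:node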

Installing and Configuring Puppet

Starting the Puppet installation is quite easy, just like installing a regular Solaris package from the repository with ‘pkg install puppet’; this should be done on all systems to complete our practice. Then prepare the Puppet server and enable it through Solaris SMF:

 

root@master:~# pkg install puppet
root@master:/etc/puppet/ssl# svccfg -s puppet:master setprop config/server = master.puppet.local.net
root@master:/etc/puppet/ssl# svccfg -s puppet:master refresh
root@master:/etc/puppet/ssl# svcadm enable puppet:master
root@master:/etc/puppet/manifests# svcs puppet
STATE          STIME    FMRI
disabled       Mai_19   svc:/application/puppet:agent
online         Mai_19   svc:/application/puppet:master

Prepare the agents and test connectivity between each agent and the master:

root@agent1:# puppet agent --test --server master.puppet.local.net
Info: Creating a new SSL key for agent1.puppet.local.net
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for agent1.puppet.local.net
Info: Certificate Request fingerprint (SHA256): 14:20:1E:C8:D8:78:1D:DF:9C:92:75:F2:72:C6:61:61:AC:56:82:06:FC:A4:6D:5E:DA:5F:7E:12:80:5B:90:A9
Info: Caching certificate for ca
Exiting; no certificate found and waitforcert is disabled
root@agent1:~#

The connection between the agent and the master is protected by SSL. On the first execution the agent creates a key and a certificate request. We have to repeat this command on the other client:

root@agent2:~# puppet agent --test --server master.puppet.local.net
Info: Creating a new SSL key for agent2.puppet.local.net
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for agent2.puppet.local.net
Info: Certificate Request fingerprint (SHA256): 76:A7:23:A7:4D:41:66:DD:71:B0:4E:AA:62:EC:0B:DB:61:59:BE:56:43:15:E7:BA:C3:CB:AF:D3:98:D3:30:18
Info: Caching certificate for ca
Exiting; no certificate found and waitforcert is disabled
root@agent2:~#

From the Puppet server ‘master’ we can list the certificate requests generated by each agent with the following command:

root@master:/etc/puppet/ssl# puppet cert list
  "agent1.puppet.local.net" (SHA256) 14:20:1E:C8:D8:78:1D:DF:9C:92:75:F2:72:C6:61:61:AC:56:82:06:FC:A4:6D:5E:DA:5F:7E:12:80:5B:90:A9
  "agent2.puppet.local.net" (SHA256) 76:A7:23:A7:4D:41:66:DD:71:B0:4E:AA:62:EC:0B:DB:61:59:BE:56:43:15:E7:BA:C3:CB:AF:D3:98:D3:30:18

Now we have two requests to sign. Let's sign them:

root@master:/etc/puppet/ssl# puppet cert sign agent1.puppet.local.net
Notice: Signed certificate request for agent1.puppet.local.net
Notice: Removing file Puppet::SSL::CertificateRequest agent1.puppet.local.net at '/etc/puppet/ssl/ca/requests/agent1.puppet.local.net.pem'
root@master:/etc/puppet/ssl# puppet cert sign agent2.puppet.local.net
Notice: Signed certificate request for agent2.puppet.local.net
Notice: Removing file Puppet::SSL::CertificateRequest agent2.puppet.local.net at '/etc/puppet/ssl/ca/requests/agent2.puppet.local.net.pem'
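
As a side note, if several requests are pending they can all be signed in one go with the same cert subcommand:

root@master:/etc/puppet/ssl# puppet cert sign --all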

Now that the certificates are signed, we can run the test on each agent again and see the different output.

Agent1 

root@agent1:~# puppet agent --test --server=master.puppet.local.net
Info: Caching certificate for agent1.puppet.local.net
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for agent1.puppet.local.net
Info: Retrieving plugin
Notice: /File[/var/lib/puppet/lib/puppet]/ensure: created
[...]
Info: Caching catalog for agent1.puppet.local.net
Info: Applying configuration version '1400520716'
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 0.09 seconds
root@agent1:~# svccfg -s puppet:agent setprop config/server = master.puppet.local.net
root@agent1:~#  svccfg -s puppet:agent refresh
root@agent1:~# svcadm enable puppet:agent

Agent2

root@agent2:~# puppet agent --test --server master.puppet.local.net
Info: Caching certificate for agent2.puppet.local.net
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for agent2.puppet.local.net
Info: Retrieving plugin
Notice: /File[/var/lib/puppet/lib/puppet]/ensure: created
[...]
Info: Caching catalog for agent2.puppet.local.net
Info: Applying configuration version '1400520716'
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 0.09 seconds
root@agent2:~# svccfg -s puppet:agent setprop config/server = master.puppet.local.net
root@agent2:~# svccfg -s puppet:agent refresh
root@agent2:~# svcadm enable puppet:agent

Connectivity between the agents and the master is tested and done. Now we will do something easy but important in system administration: managing /etc/hosts. We create a Puppet module called etchosts which delivers an /etc/hosts file; the file lives at /etc/puppet/modules/etchosts/files and can be accessed through Puppet under the URL puppet:///modules/etchosts/hosts.

I will go through each step on the Puppet master and the Puppet agents to show you how to configure it.

root@master:~# mkdir /etc/puppet/modules/etchosts
root@master:~# mkdir /etc/puppet/modules/etchosts/files
root@master:~# mkdir /etc/puppet/modules/etchosts/manifests
root@master:~# cp /etc/hosts /etc/puppet/modules/etchosts/files/hosts

Now we will give some life to the etchosts module:

root@master:~# cat << EOT > /etc/puppet/modules/etchosts/manifests/init.pp
> class etchosts {
>         file { "/etc/hosts":
>                 source => 'puppet:///modules/etchosts/hosts',
>         }
> }
> EOT
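
Before moving on it does not hurt to check the manifest for syntax errors; Puppet ships a parser validate subcommand for exactly that (a quick sketch):

root@master:~# puppet parser validate /etc/puppet/modules/etchosts/manifests/init.pp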

Afterwards we create a site manifest that just imports the node definition file:

root@master:~# cat << EOT > /etc/puppet/manifests/site.pp
> import 'nodes.pp'
> EOT

At last we define the behaviour for the “default” node:

root@master:~# cat << EOT > /etc/puppet/manifests/nodes.pp
> import 'etchosts'
>
> node 'default' {
>  include etchosts
> }
> EOT

Now I just add a line to the /etc/puppet/modules/etchosts/files/hosts file.

echo "# Add Line to puppet" >> /etc/puppet/modules/etchosts/files/hosts

Okay, now log into one of the zones with an agent and check the current /etc/hosts:

hosam@agent2:~$ cat /etc/hosts
[...]
192.168.1.102   agent2.puppet.local.net agent2
hosam@agent2:~$

At the moment the additional line isn't in the file yet. You can now wait for at most 1800 seconds (because by default the agent checks every 1800 seconds whether there is something to do; a note on shortening this interval follows below) or you can force the check. Let's force it.

root@agent2:~# puppet agent --test
Info: Retrieving plugin
Info: Caching catalog for agent2.puppet.local.net
Info: Applying configuration version '1400529373'
Notice: /Stage[main]/Etchosts/File[/etc/hosts]/content:
--- /etc/hosts  Tue May 20 08:10:32 2014
+++ /tmp/puppet-file20140520-8490-tfgwja        Tue May 20 08:10:02 2014
@@ -11,2 +11,3 @@
 192.168.1.102  agent2.puppet.local.net agent2
+# Add Line to puppet

Info: /Stage[main]/Etchosts/File[/etc/hosts]: Filebucketed /etc/hosts to puppet with sum 38f6c964aab77edb2ff938094f13e2d0
Notice: /Stage[main]/Etchosts/File[/etc/hosts]/content: content changed '{md5}38f6c964aab77edb2ff938094f13e2d0' to '{md5}49b07e8c62ed409a01216bf9a35ae7ae'
Notice: Finished catalog run in 0.60 seconds

Now let's check the file again … et voilà … you find the line you added to /etc/puppet/modules/etchosts/files/hosts in the /etc/hosts of your zone.

root@agent2:~# cat /etc/hosts
[...]
192.168.1.101   agent1.puppet.local.net agent1
192.168.1.102   agent2.puppet.local.net agent2
# Add Line to puppet
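
As mentioned above, the agent wakes up every 1800 seconds by default. If you prefer a shorter interval instead of forcing runs by hand, the runinterval setting can be lowered in puppet.conf (a sketch, assuming the default /etc/puppet/puppet.conf used by the Solaris package and no existing [agent] section; 600 is just an example value):

root@agent2:~# cat >> /etc/puppet/puppet.conf << EOT
> [agent]
> runinterval = 600
> EOT
root@agent2:~# svcadm restart puppet:agent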

Within a cup of coffee all updates are synced from the master to all agents 😀 Done!

 

How to change a Solaris Regular User to have the root Role

* Create a regular user as a normal Solaris user

root@node2# useradd -c "Comment" -d /export/home/User_Name User_Name
root@node2# useradd -c "Regular User with root Role" -d /export/home/Support Support
                      -c  Comment
                      -d  User home directory
                      User_Name  Login name of the new user

* Assign a password for the user Support. The -r option specifies the repository to which the operation is applied; the supported repositories are files, ldap, and nis.
root@node2:/# passwd -r files Support
New Password:
Re-enter new Password:
passwd: password successfully changed for Support

* Assign the root role to the user account (Support)
root@node2:/# usermod -R root Support

* Log in as the new user (Support) and verify:
bash-4.1$ whoami
Support
bash-4.1$ roles
root
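
To actually work as root, the user then assumes the role with su and the role's password (a short sketch):

bash-4.1$ su -
Password:
root@node2:/# id
uid=0(root) gid=0(root)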

Consolidate Oracle Application on Oracle Exadata

Oracle Exadata WebCast

 

  • Why Consolidate on Exadata?
  • Steps to Successful Consolidation
  • Findings from Our Internal Case Study
  • Key Takeaways

Download the Oracle Exadata WebCast PDF for more information about this WebCast.

c0t0d0s0.org – Closed

An Oracle-related blog closure is that of Joerg Moellenkamp, who has closed his Solaris-oriented c0t0d0s0.org blog, which was home to the “Lesser Known Solaris Features” (LKSF) tutorial. Moellenkamp, an Oracle employee, says the offlining of the blog is not because of health issues, and that there were no technical problems and no copyright issues, but beyond that he will not say why he has taken the decision. The LKSF tutorial has been saved from the shutdown and is available to download as a PDF. The closure of the c0t0d0s0 blog does, though, further reduce the already meagre number of Solaris blogs.

It is worth noting that Oracle, as a company, has never been comfortable with employee blogging. Oracle appears to regard blogs as mostly a function of its public relations department and it is said that the complexity of working with the PR department to approve postings is sufficient that many either give up or don’t try.

 

 

VNCserver on Solaris

Install & configure the Xvnc server on the Solaris 10 OS

You will find the SFWvnc package on the Solaris 10 companion-i386-sol10 CD. You can download it from

http://www.realvnc.com/ or install it with the command:

bash-3.00# pkgadd -d . SFWvnc

and after installing vncserver on Solaris you will have a service FMRI for it:

svcs -a | grep -i x11

svc:/application/x11/xvnc-inetd:default

then edit two files:

1 – vi /etc/services or append the entry directly:

echo "vnc-server      5900/tcp       #Xvnc Server Added by Hosam" >> /etc/services

2 – vi /etc/X11/gdm/gdm.conf

[xdmcp]

Enable = true

[Security]

DisallowTCP = false

AllowRoot = true

AllowRemoteRoot = true

and just enable the service by its FMRI:

svcadm enable svc:/application/x11/xvnc-inetd:default

and try to run vncviewer from any platform to connect to the Solaris box 😉
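
For example, from another machine (solaris-box is just a placeholder for your server's name or IP; with the inetd-spawned Xvnc on port 5900 the display number is 0):

$ vncviewer solaris-box:0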

Tmpfs and Bind Mounts

Introduction

In my previous articles in this series, I introduced the benefits of journaling and the ReiserFS and showed how to set up a rock-solid ReiserFS system. In this article, we’re going to tackle a couple of semi-offbeat topics. First, we’ll take a look at tmpfs, also known as the virtual memory (VM) filesystem. Tmpfs is probably the best RAM disk-like system available for Linux right now, and was introduced with Linux kernel 2.4. Then, we’ll take a look at another capability introduced with Linux kernel 2.4 called “bind mounts”, which allow a great deal of flexibility when it comes to mounting (and remounting) filesystems.

Introducing Tmpfs

If I had to explain tmpfs in one breath, I’d say that tmpfs is like a ramdisk, but different. Like a ramdisk, tmpfs can use your RAM, but it can also use your swap devices for storage. And while a traditional ramdisk is a block device and requires a mkfs command of some kind before you can actually use it, tmpfs is a filesystem, not a block device; you just mount it, and it’s there. All in all, this makes tmpfs the niftiest RAM-based filesystem I’ve had the opportunity to meet.

Tmpfs and VM

Let’s take a look at some of tmpfs’s more interesting properties. As I mentioned above, tmpfs can use both RAM and swap. This might seem a bit arbitrary at first, but remember that tmpfs is also known as the “virtual memory filesystem”. And, as you probably know, the Linux kernel’s virtual memory resources come from both your RAM and swap devices. The VM subsystem in the kernel allocates these resources to other parts of the system and takes care of managing these resources behind-the-scenes, often transparently moving RAM pages to swap and vice-versa.

The tmpfs filesystem requests pages from the VM subsystem to store files. tmpfs itself doesn’t know whether these pages are on swap or in RAM; it’s the VM subsystem’s job to make those kinds of decisions. All the tmpfs filesystem knows is that it is using some form of virtual memory.

Not a Block Device

Here’s another interesting property of the tmpfs filesystem. Unlike most “normal” filesystems, like ext3, ext2, XFS, JFS, ReiserFS and friends, tmpfs does not exist on top of an underlying block device. Because tmpfs sits on top of VM directly, you can create a tmpfs filesystem with a simple mount command:

# mount tmpfs /mnt/tmpfs -t tmpfs

After executing this command, you’ll have a new tmpfs filesystem mounted at /mnt/tmpfs, ready for use. Note that there’s no need to run mkfs.tmpfs; in fact, it’s impossible, as no such command exists. Immediately after the mount command, the filesystem is mounted and available for use, and is of type tmpfs. This is very different from how Linux ramdisks are used; standard Linux ramdisks are block devices, so they must be formatted with a filesystem of your choice before you can use them. In contrast, tmpfs is a filesystem. So, you can just mount it and go.
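
If you want to convince yourself that it is really there, a quick look with mount and df right after mounting should show the new filesystem with type tmpfs (a sketch):

# mount | grep tmpfs
# df -h /mnt/tmpfs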

Tmpfs Advantages

Dynamic Filesystem Size

You’re probably wondering about how big that tmpfs filesystem was that we mounted at /mnt/tmpfs, above. The answer to that question is a bit unexpected, especially when compared to disk-based filesystems. /mnt/tmpfs will initially have a very small capacity, but as files are copied and created, the tmpfs filesystem driver will allocate more VM and will dynamically increase the filesystem capacity as needed. And, as files are removed from /mnt/tmpfs, the tmpfs filesystem driver will dynamically shrink the size of the filesystem and free VM resources, and by doing so return VM into circulation so that it can be used by other parts of the system as needed. Since VM is a precious resource, you don’t want anything hogging more VM than it actually needs, and the great thing about tmpfs is that this all happens automatically.

Speed

The other major benefit of tmpfs is its blazing speed. Because a typical tmpfs filesystem will reside completely in RAM, reads and writes can be almost instantaneous. Even if some swap is used, performance is still excellent and those parts of the tmpfs filesystem will be moved to RAM as more free VM resources become available. Having the VM subsystem automatically move parts of the tmpfs filesystem to swap can actually be good for performance, since by doing so, the VM subsystem can free up RAM for processes that need it. This, along with its dynamic resizing abilities, allow for much better overall OS performance and flexibility than the alternative of using a traditional RAM disk.

No Persistence

While this may not seem like a positive, tmpfs data is not preserved between reboots, because virtual memory is volatile in nature. I guess you probably figured that tmpfs was called “tmpfs” for a reason, didn’t you? However, this can actually be a good thing. It makes tmpfs an excellent filesystem for holding data that you don’t need to keep, such as temporary files (those found in /tmp) and parts of the /var filesystem tree.

Using Tmpfs

To use tmpfs, all you need is a modern (2.4+) kernel with Virtual memory file system support (former shm fs) enabled; this option lives under the File systems section of the kernel configuration options. Once you have a tmpfs-enabled kernel, you can go ahead and mount tmpfs filesystems. In fact, it’s a good idea to enable tmpfs in all your kernels if you compile them yourself – whether you plan to use tmpfs or not. This is because you need to have kernel tmpfs support in order to use POSIX shared memory. System V shared memory will work without tmpfs in your kernel, however. Note that you do not need a tmpfs filesystem to be mounted for POSIX shared memory to work; you simply need the support in your kernel. POSIX shared memory isn’t used too much right now, but this situation will likely change as time goes on.
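
A quick way to check whether your running kernel has tmpfs support is to look at /proc/filesystems (the nodev flag simply means that tmpfs needs no backing block device):

# grep tmpfs /proc/filesystems
nodev   tmpfs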

Avoiding low VM conditions

The fact that tmpfs dynamically grows and shrinks as needed makes one wonder: what happens when your tmpfs filesystem grows to the point where it exhausts all of your virtual memory, and you have no RAM or swap left? Well, generally, this kind of situation is a bit ugly. With kernel 2.4.4, the kernel would immediately lock up. With more recent kernels, the VM subsystem has in many ways been fixed, and while exhausting VM isn’t exactly a wonderful experience, things don’t blow up completely, either. When a modern kernel gets to the point where it can’t allocate any more VM, you obviously won’t be able to write any new data to your tmpfs filesystem. In addition, it’s likely that some other things will happen. First, the other processes on the system will be unable to allocate much more memory; generally, this means that the system will most likely become extremely sluggish and almost unresponsive. Thus, it may be tricky or unusually time-consuming for the superuser to take the necessary steps to alleviate this low-VM condition.

In addition, the kernel has a built-in last-ditch system for freeing memory when no more is available; it’ll find a process that’s hogging VM resources and kill it. Unfortunately, this “kill a process” solution generally backfires when tmpfs growth is to blame for VM exhaustion. Here’s the reason. Tmpfs itself can’t (and shouldn’t) be killed, since it is part of the kernel and not a user process, and there’s no easy way for the kernel to find out which process is filling up the tmpfs filesystem. So, the kernel mistakenly attacks the biggest VM-hog of a process it can find, which is generally your X server if you happen to be running one. So, your X server dies, and the root cause of the low-VM condition (tmpfs) isn’t addressed. Ick.

Low VM: the solution

Fortunately, tmpfs allows you to specify a maximum upper bound for the filesystem size when a filesystem is mounted or remounted. Actually, as of kernel 2.4.6 and util-linux-2.11g, these parameters can only be set on mount, not on remount, but we can expect them to be settable on remount sometime in the near future. The optimal maximum tmpfs size setting depends on the resources and usage pattern of your particular Linux box; the idea is to prevent a completely full tmpfs filesystem from exhausting all virtual memory and thus causing the ugly low-VM conditions that we talked about earlier. A good way to find a good tmpfs upper-bound is to use top to monitor your system’s swap usage during peak usage periods. Then, make sure that you specify a tmpfs upper-bound that’s slightly less than the sum of all free swap and free RAM during these peak usage times.

Creating a tmpfs filesystem with a maximum size is easy. To create a new tmpfs filesystem with a maximum filesystem size of 32 MB, type:

# mount tmpfs /dev/shm -t tmpfs -o size=32m

This time, instead of mounting our new tmpfs filesystem at /mnt/tmpfs, we created it at /dev/shm, which is a directory that happens to be the “official” mount point for a tmpfs filesystem. If you happen to be using devfs, you’ll find that this directory has already been created for you.

Also, if we want to limit the filesystem size to 512 KB or 1 GB, we can specify size=512k and size=1g, respectively. In addition to limiting size, we can also limit the number of inodes (filesystem objects) by specifying the nr_inodes=x parameter. When using nr_inodes, x can be a simple integer, and can also be followed with a k, m, or g to specify thousands, millions, or billions (!) of inodes.

Also, if you’d like to add the equivalent of the above mount tmpfs command to your /etc/fstab, it’d look like this:

tmpfs   /dev/shm        tmpfs   size=32m        0       0
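
And if you want to limit both the size and the number of inodes in one go, the options can simply be combined (values here are just examples):

# mount tmpfs /dev/shm -t tmpfs -o size=1g,nr_inodes=1m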

Mounting On Top of Existing Mount Points

Back in the 2.2 days, any attempt to mount something to a mount point where something had already been mounted resulted in an error. However, thanks to a rewrite of the kernel mounting code, using mount points multiple times is not a problem. Here’s an example scenario: let’s say that we have an existing filesystem mounted at /tmp. However, we decide that we’d like to start using tmpfs for /tmp storage. In the old days, your only option would be to unmount /tmp and remount your new tmpfs /tmp filesystem in its place, as follows:

#  umount /tmp
#  mount tmpfs /tmp -t tmpfs -o size=64m

However, this solution may not work for you. Maybe there are a number of running processes that have open files in /tmp; if so, when trying to unmount /tmp, you’d get the following error:

umount: /tmp: device is busy

However, with Linux 2.4+, you can mount your new /tmp filesystem without getting the “device is busy” error:

# mount tmpfs /tmp -t tmpfs -o size=64m

With a single command, your new tmpfs /tmp filesystem is mounted at /tmp, on top of the already-mounted partition, which can no longer be directly accessed. However, while you can’t get to the original /tmp, any processes that still have open files on this original filesystem can continue to access them. And, if you umount your tmpfs-based /tmp, your original mounted /tmp filesystem will reappear. In fact, you can mount any number of filesystems to the same mount point, and the mount point will act like a stack; unmount the current filesystem, and the last-most-recently mounted filesystem will reappear from underneath.

Bind Mounts

Using bind mounts, we can mount all, or even part of an already-mounted filesystem to another location, and have the filesystem accessible from both mount points at the same time! For example, you can use bind mounts to mount your existing root filesystem to /home/drobbins/nifty, as follows:

#  mount --bind / /home/drobbins/nifty

Now, if you look inside /home/drobbins/nifty, you’ll see your root filesystem (/home/drobbins/nifty/etc, /home/drobbins/nifty/opt, etc.). And if you modify a file on your root filesystem, you’ll see the modifications in /home/drobbins/nifty as well. This is because they are one and the same filesystem; the kernel is simply mapping the filesystem to two different mount points for us. Note that when you mount a filesystem somewhere else, any filesystems that were mounted to mount points inside the bind-mounted filesystem will not be moved along. In other words, if you have /usr on a separate filesystem, the bind mount we performed above will leave /home/drobbins/nifty/usr empty. You’ll need an additional bind mount command to allow you to browse the contents of /usr at /home/drobbins/nifty/usr:

#  mount --bind /usr /home/drobbins/nifty/usr

Bind mounting parts of filesystems

Bind mounting makes even more neat things possible. Let’s say that you have a tmpfs filesystem mounted at /dev/shm, its traditional location, and you decide that you’d like to start using tmpfs for /tmp, which currently lives on your root filesystem. Rather than mounting a new tmpfs filesystem to /tmp (which is possible), you may decide that you’d like the new /tmp to share the currently mounted /dev/shm filesystem. However, while you could bind mount /dev/shm to /tmp and be done with it, your /dev/shm contains some directories that you don’t want to appear in /tmp. So, what do you do? How about this:

# mkdir /dev/shm/tmp
# chmod 1777 /dev/shm/tmp
# mount --bind /dev/shm/tmp /tmp

In this example, we first create a /dev/shm/tmp directory and then give it 1777 perms, the proper permissions for /tmp. Now that our directory is ready, we can mount /dev/shm/tmp, and only /dev/shm/tmp to /tmp. So, while /tmp/foo would map to /dev/shm/tmp/foo, there’s no way for you to access the /dev/shm/bar file from /tmp.

As you can see, bind mounts are extremely powerful and make it easy to make modifications to your filesystem layout without any fuss. Next article, we’ll check out devfs; for now, you may want to check out the following resources.

lofiadm Solaris Mount an ISO Image

Just like the Linux loopback device, Sun Solaris UNIX has the lofi loopback file driver. The lofi file driver exports a file as a block device. Reads and writes to the block device are translated to reads and writes on the underlying file. This is useful when the file contains a file system image (such as an ISO image). Exporting it as a block device through the lofi file driver allows normal system utilities, such as mount and fsck, to operate on the image through the block device. This is useful for accessing CD-ROM and FAT floppy images.

lofiadm is the command you need to use to mount an existing CD-ROM image under Sun Solaris UNIX. This is useful when the file contains an image of some filesystem (such as a floppy or CD-ROM image), because the block device can then be used with the normal system utilities for mounting, checking or repairing the filesystem.

Mounting an Existing ISO CD-ROM Image under Solaris UNIX

We have an image named cd.iso; you can type the command:

# lofiadm -a /path/to/cd.iso

Output:

/dev/lofi/1

Please note that the file name argument to lofiadm must be fully qualified and the path must be absolute, not relative (thanks to Mike for the tip). /dev/lofi/1 is the device; use it to mount the ISO image with the mount command:

# mount -o ro -F hsfs /dev/lofi/1 /mnt
# cd /mnt
# ls -l
# df -k /mnt

Mount the loopback device as a randomly accessible file system with

# mount -F hsfs -o ro /dev/lofi/X /mnt

Alternatively, use this combined format:

# mount -F hsfs -o ro `lofiadm -a /path/to/image.iso` /mnt

Unmount and detach the images

Use umount command to unmount image:
# umount /mnt

Now remove/free block device:
# lofiadm -d /dev/lofi/1

For more information read lofiadm and lofi man pages by typing the  following command:

man lofiadm

UNIX File System – UFS

UFS

UFS in its various forms has been with us since the days of BSD on VAXen the size of refrigerators. The basic UFS concepts thus date back to the early 1980s and represent the second pass at a workable UNIX filesystem, after the very slow and simple filesystem that shipped with the truly ancient Version 7 UNIX. Almost all commercial UNIX OSs have had a UFS, and ext3 in Linux is similar to UFS in design. Solaris inherited UFS through SunOS, and SunOS in turn got it from BSD.

Until recently, UFS was the only filesystem that shipped with Solaris. Unlike HP, IBM, SGI, and DEC, Sun did not develop a next-generation filesystem during the 1990s. There are probably at least two reasons for this: most competitors developed their new filesystems using third party code which required per-system royalties, and the availability of VxFS from Veritas. Considering that a lot of the other vendors’ filesystem IP was licensed from Veritas anyway, this seems like a reasonable decision.

Solaris 10 can only boot from a UFS root filesystem. In the future, ZFS boot will be available, as it already is in OpenSolaris. But for now, every Solaris system must have at least one UFS filesystem.

UFS is old technology but it is a stable and fast filesystem. Sun has continuously tuned and improved the code over the last decade and has probably squeezed as much performance out of this type of FS as is possible. Journaling support was added in Solaris 7 at the turn of the century and has been enabled by default since Solaris 9. Before that, volume level journaling was available. In this older scheme, changes to the raw device are journaled, and the filesystem is not journaling-aware. This is a simple but inefficient scheme, and it worked with a small performance penalty. Volume level journaling is now end-of-lifed, but interestingly, the same sort of system seems to have been added to FreeBSD recently. What is old is new again.

UFS is accompanied by the Solaris Volume Manager, which provides perfectly serviceable software RAID.
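
As a very rough sketch of what that looks like in practice, a two-way SVM mirror of a data slice is built from a few state database replicas and two submirrors (the device names are placeholders; check the metadb, metainit and metattach man pages before trying this on a real system):

# metadb -a -f -c 3 c0t0d0s7
# metainit d11 1 1 c0t0d0s0
# metainit d12 1 1 c0t1d0s0
# metainit d10 -m d11
# metattach d10 d12
# newfs /dev/md/rdsk/d10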

Where does UFS fit in in 2008? Besides booting, it provides a filesystem which is stable and predictable and better integrated into the OS than anything else. ZFS will probably replace it eventually, but for now, it is a good choice for databases, which have usually been tuned for a traditional filesystem’s access characteristics. It is also a good choice for the pathologically conservative administrator, who may not have an exciting job, but who rarely has his nap time interrupted.

Configuring the NFS Server for Sharing Resources

When the mountd and nfsd daemons are running, you can use the share command to make file resources available:

share [ -F nfs ] [ -o options ] [ -d description ] [ pathname ]

where:

share Command Options

Option            Description
-F nfs            Specifies the file system type. This option is not typically required, because NFS is the default remote file system type.
-o options        Controls a client’s access to an NFS shared resource.
-d description    Describes the shared file resource.
pathname          Specifies the absolute path name of the resource for sharing.

Note: Unless you specify an option to the share command, for example, -F nfs, the system uses the file system type from the first line of the /etc/dfs/fstypes file.
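
If you are not sure what that default is on your system, just look at the file; the file system type in its first line is what the share command will assume:

# cat /etc/dfs/fstypes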

To share a file resource from the command line, you can use the share command. For example, to share the /usr/local/data directory as a read-only shared resource, perform the command:

#share -o ro /usr/local/data

By default, NFS-mounted resources are available with read and write privileges based on standard Solaris OS file permissions. Access decisions are based on a comparison of the user ID (UID) of the client and the owner.

The share Command

Option            Description
ro                Informs clients that the server accepts only read requests.
rw                Allows the server to accept read and write requests from the client.
root=client       Informs clients that the root user on the specified client system or systems can perform superuser-privileged requests on the shared resource.
ro=access-list    Allows read requests from the specified access list.
rw=access-list    Allows read and write requests from the specified access list, as shown in the table below.

Access List Options

Option                        Description
access-list=client:client     Allows access based on a colon-separated list of one or more clients.
access-list=@network          Allows access based on a network number (for example, @192.168.100) or a network name (for example, @mydomain.com). The network name must be defined in the /etc/networks file.
access-list=.domain           Allows access based on a Domain Name System (DNS) domain; the dot (.) identifies the value as a DNS domain.
access-list=netgroup_name     Allows access based on a configured net group (Network Information Service [NIS] or Network Information Service Plus [NIS+] only).
anon=n                        Sets n to be the effective user ID (EUID) of anonymous users. By default, anonymous users are given the EUID 60001 (UID_NOBODY). If n is set to -1, access is denied.

You can combine these options by separating each option with commas, which forms intricate access restrictions. The following examples show some of the more commonly used options:

# share -F nfs -o ro directory
This command restricts access to NFS-mounted resources to read-only access.
# share -F nfs -o ro,rw=client1 directory
This command restricts access to NFS-mounted resources to read-only access;
however, the NFS server accepts both read and write requests from the client
named client1.
# share -F nfs -o root=client2 directory
This command allows the root user on the client named client2 to have
superuser access to the NFS-mounted resources.
# share -F nfs -o ro,anon=0 directory
By setting the option anon=0, the EUID for access to shared resources
by an anonymous user is set to 0. The access is also set to read-only.
While setting the EUID to 0, the same UID as the root user,
might seem to open up security access, the UID of 0 is converted to
the user identity of nobody. This has the effect that an anonymous user
from a client host, where the UID of that user is not known
on the server host, is treated as the user called nobody by
the server (UID=60001).

# share -F nfs \
-o ro=client1:client2,rw=client3:client4,root=client4 directory
This command shares the directory to the four named hosts only. The hosts
client1 and client2 have read-only access. The hosts client3 and client4 have
read-write access. The root user from host client4 has root privilege access
to the shared directory and its contents.
The share command writes information for all shared file resources to the
/etc/dfs/sharetab file. The file contains a table of the local shared resources.

Note: If no argument is specified, the share command displays a list of
all the currently shared file resources.

# share
-              /usr/local/data   ro   "Shared data files"
-              /rdbms_files   rw,root=sys01   "Database files"

Making File Resources Unavailable for Mounting

Use the unshare command to make file resources unavailable for mount operations. This command reads the /etc/dfs/sharetab file.
unshare [ -F nfs ] pathname
Where:

unshare Command Options

Option      Description
-F nfs      Specifies NFS as the file system type. Because NFS is the default remote file system type, you do not have to specify this option.
pathname    Specifies the path name of the file resource to unshare.

For example, to make the /usr/local/data directory unavailable for client-side mount operations, perform the command:

# unshare /usr/local/data

Sharing and Unsharing All NFS Resources

Use the shareall and unshareall commands to share and unshare all NFS resources.

The shareall command, when used without arguments, shares all resources listed in the /etc/dfs/dfstab file.

shareall [ -F nfs ]
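
To make shares persistent across reboots, the usual approach is to put the corresponding share command lines into /etc/dfs/dfstab and then run shareall; a minimal sketch reusing the directory from the earlier examples:

# cat /etc/dfs/dfstab
share -F nfs -o ro -d "Shared data files" /usr/local/data
# shareall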

The unshareall command, when used without arguments, unshares currently shared file resources listed in the /etc/dfs/sharetab file.

unshareall [ -F nfs ]