This page describes my experiences installing a NetApp Data ONTAP Simulator, release 8.1.2 in 7-mode.
Apologies for any typos; I don’t have a serial console for this, so I retyped everything by hand (until I had SSH access). Also, the timestamps jump forward (and the serial numbers change!) because I did this in my ‘spare’ time, over several attempts.
The NetApp simulator is only available to licensed NetApp end users. If you have a valid login, you can retrieve it from the NetApp Support site.
After importing the VM into ESXi/Fusion/Workstation, adjust the VMware configuration to suit your environment; for me, this meant disabling three of the four NICs, as I only have one VM network.
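If you prefer to hand-edit the .vmx file rather than click through the GUI, disabling the extra NICs looks roughly like this (a sketch from memory of the .vmx key names — treat them as assumptions and verify against your VMware version):

ethernet1.present = "FALSE"
ethernet2.present = "FALSE"
ethernet3.present = "FALSE"

This assumes the simulator’s four NICs are ethernet0 through ethernet3 and that you’re keeping the first one.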
Boot up the VM. When prompted, interrupt the boot process with ^C:
BTX loader 1.00  BTX version is 1.02
Consoles: internal video/keyboard
BIOS drive A: is disk0
BIOS drive C: is disk1
BIOS drive D: is disk2
BIOS drive E: is disk3
BIOS drive F: is disk4
BIOS 638kB/1636288kB available memory

FreeBSD/i386 bootstrap loader, Revision 1.1
(root@bldlsvl61.eng.netapp.com, Tue Oct 30 19:56:04 PDT 2012)
Loading /boot/defaults/loader.conf
Hit [Enter] to boot immediately, or any other key for command prompt.
Booting...
x86_x64/freebsd/image1/kernel data=0x80f178+0x138350 syms=[0x08+0x3b9a0+0x8+0x27619]
x86_x64/freebsd/image1/platform.ko size 0x237fe8 at 0xaab000
NetApp Data ONTAP 8.1.2 7-Mode
Copyright (C) 1992-2012 NetApp.
All rights reserved.
md1.uzip: 26368 x 16384 blocks
md2.uzip: 3584 x 16384 blocks
*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************
^C
If it’s your first time booting the VM, you may now see lines like:
Creating ,disks/v0.16:NETAPP__:VD-1000MB-FZ-520:12034300:2104448
Creating ,disks/v0.17:NETAPP__:VD-1000MB-FZ-520:12034301:2104448
Creating ,disks/v0.18:NETAPP__:VD-1000MB-FZ-520:12034302:2104448
Creating ,disks/v0.19:NETAPP__:VD-1000MB-FZ-520:12034303:2104448
Creating ,disks/v0.20:NETAPP__:VD-1000MB-FZ-520:12034304:2104448
Creating ,disks/v0.21:NETAPP__:VD-1000MB-FZ-520:12034305:2104448
Creating ,disks/v0.22:NETAPP__:VD-1000MB-FZ-520:12034306:2104448
Creating ,disks/v0.24:NETAPP__:VD-1000MB-FZ-520:12034307:2104448
Creating ,disks/v0.25:NETAPP__:VD-1000MB-FZ-520:12034308:2104448
Creating ,disks/v0.26:NETAPP__:VD-1000MB-FZ-520:12034309:2104448
Creating ,disks/v0.27:NETAPP__:VD-1000MB-FZ-520:12034310:2104448
Creating ,disks/v0.28:NETAPP__:VD-1000MB-FZ-520:12034311:2104448
Creating ,disks/v0.29:NETAPP__:VD-1000MB-FZ-520:12034312:2104448
Creating ,disks/v0.32:NETAPP__:VD-1000MB-FZ-520:12034313:2104448
Shelf file Shelf:DiskShelf14 updated
Creating ,disks/v1.16:NETAPP__:VD-1000MB-FZ-520:13911400:2104448
Creating ,disks/v1.17:NETAPP__:VD-1000MB-FZ-520:13911401:2104448
Creating ,disks/v1.18:NETAPP__:VD-1000MB-FZ-520:13911402:2104448
Creating ,disks/v1.19:NETAPP__:VD-1000MB-FZ-520:13911403:2104448
Creating ,disks/v1.20:NETAPP__:VD-1000MB-FZ-520:13911404:2104448
Creating ,disks/v1.21:NETAPP__:VD-1000MB-FZ-520:13911405:2104448
Creating ,disks/v1.22:NETAPP__:VD-1000MB-FZ-520:13911406:2104448
Creating ,disks/v1.24:NETAPP__:VD-1000MB-FZ-520:13911407:2104448
Creating ,disks/v1.25:NETAPP__:VD-1000MB-FZ-520:13911408:2104448
Creating ,disks/v1.26:NETAPP__:VD-1000MB-FZ-520:13911409:2104448
Creating ,disks/v1.27:NETAPP__:VD-1000MB-FZ-520:13911410:2104448
Creating ,disks/v1.28:NETAPP__:VD-1000MB-FZ-520:13911411:2104448
Creating ,disks/v1.29:NETAPP__:VD-1000MB-FZ-520:13911412:2104448
Creating ,disks/v1.32:NETAPP__:VD-1000MB-FZ-520:13911413:2104448
Shelf file Shelf:DiskShelf14 updated
The boot menu appears after a short wait. Choose option 4 when prompted:
Boot Menu will be available.

Please choose one of the following:

(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.

Selection (1-8)? 4
After this, a slew of debug & info messages will be spewed to the screen, along with a question that’s easy to miss:
Aug 29 18:15:28 [localhost:nv.fake:CRITICAL]: 32 MB system memory being used to simulate NVRAM.
Aug 29 18:15:30 [localhost:netif.linkUp:info]: Ethernet e0a: Link up.
Aug 29 18:15:33 [localhost:diskown.isEnabled:info]: software ownership has been enabled for this system
Aug 29 18:15:33 [localhost:dcs.framework.enabled:info]: The DCS framework is enabled on this node.
Aug 29 18:15:33 [localhost:kern.cli.cmd:debug]: Command line input: the command is 'ifconfig'. The full command line is 'ifconfig lo 127.0.0.1'.
Aug 29 18:15:33 [localhost:kern.cli.cmd:debug]: Command line input: the command is 'ifconfig_priv'. The full command line is 'ifconfig_priv losk 127.0.20.1'.
Aug 29 18:15:33 [localhost:kern.cli.cmd:debug]: Command line input: the command is 'ifconfig_priv'. The full command line is 'ifconfig_priv lo 0'.
Aug 29 18:15:33 [localhost:kern.cli.cmd:debug]: Command line input: the command is 'ifconfig_priv'. The full command line is 'ifconfig_priv lo 127.0.0.1'.
Aug 29 18:15:33 [localhost:kern.cli.cmd:debug]: Command line input: the command is 'route_priv'. The full command line is 'route_priv add host 127.0.10.1 127.0.20.1 0'.
add host 127.0.10.1: gateway 127.0.20.1
WAFL CPLEDGER is enabled. Checklist = 0x7ff841ff
Aug 29 18:15:34 [localhost:wafl.memory.status:info]: 433MB of memory is currently available for the WAFL file system.
Zero disks, reset config and install a new file system?:
Aug 29 18:15:35 [localhost:netif.linkDown:info]: Ethernet e0d: Link down, check cable.
Aug 29 18:15:35 [localhost:netif.linkDown:info]: Ethernet e0c: Link down, check cable.
Aug 29 18:15:35 [localhost:netif.linkDown:info]: Ethernet e0b: Link down, check cable.
In case you missed it, the question is actually: Zero disks, reset config and install a new file system? Answer in the affirmative, twice for good measure:
Zero disks, reset config and install a new file system?: yes
This will erase all the data on the disks, are you sure?: yes
Rebooting to finish wipeconfig request.
Skipped backing up /var file system to CF.
Uptime: 1m23s
System rebooting...
The system will restart. This time, let the boot process continue uninterrupted.
BTX loader 1.00  BTX version is 1.02
Consoles: internal video/keyboard
BIOS drive A: is disk0
BIOS drive C: is disk1
BIOS drive D: is disk2
BIOS drive E: is disk3
BIOS drive F: is disk4
BIOS 638kB/1636288kB available memory

FreeBSD/i386 bootstrap loader, Revision 1.1
(root@bldlsvl61.eng.netapp.com, Tue Oct 30 19:56:04 PDT 2012)
Loading /boot/defaults/loader.conf
Hit [Enter] to boot immediately, or any other key for command prompt.
Booting...
x86_x64/freebsd/image1/kernel data=0x80f178+0x138350 syms=[0x08+0x3b9a0+0x8+0x27619]
x86_x64/freebsd/image1/platform.ko size 0x237fe8 at 0xaab000
NetApp Data ONTAP 8.1.2 7-Mode
Copyright (C) 1992-2012 NetApp.
All rights reserved.
md1.uzip: 26368 x 16384 blocks
md2.uzip: 3584 x 16384 blocks
*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************
Wipe filer procedure requested.
Sep 02 13:57:25 [localhost:nv.fake:CRITICAL]: 32 MB system memory being used to simulate NVRAM.
Sep 02 13:57:27 [localhost:netif.linkUp:info]: Ethernet e0a: Link up.
Sep 02 13:57:30 [localhost:diskown.isEnabled:info]: software ownership has been enabled for this system
Sep 02 13:57:30 [localhost:dcs.framework.enabled:info]: The DCS framework is enabled on this node.
Sep 02 13:57:30 [localhost:kern.cli.cmd:debug]: Command line input: the command is 'ifconfig'. The full command line is 'ifconfig lo 127.0.0.1'.
Sep 02 13:57:30 [localhost:kern.cli.cmd:debug]: Command line input: the command is 'ifconfig_priv'. The full command line is 'ifconfig_priv losk 127.0.20.1'.
Sep 02 13:57:30 [localhost:kern.cli.cmd:debug]: Command line input: the command is 'ifconfig_priv'. The full command line is 'ifconfig_priv lo 0'.
Sep 02 13:57:30 [localhost:kern.cli.cmd:debug]: Command line input: the command is 'ifconfig_priv'. The full command line is 'ifconfig_priv lo 127.0.0.1'.
Sep 02 13:57:30 [localhost:kern.cli.cmd:debug]: Command line input: the command is 'route_priv'. The full command line is 'route_priv add host 127.0.10.1 127.0.20.1 0'.
add host 127.0.10.1: gateway 127.0.20.1
WAFL CPLEDGER is enabled. Checklist = 0x7ff841ff
Sep 02 13:57:31 [localhost:wafl.memory.status:info]: 433MB of memory is currently available for the WAFL file system.
Zero disks, reset config and install a new file system?:
Sep 02 13:57:32 [localhost:netif.linkDown:info]: Ethernet e0d: Link down, check cable.
Sep 02 13:57:32 [localhost:netif.linkDown:info]: Ethernet e0c: Link down, check cable.
Sep 02 13:57:32 [localhost:netif.linkDown:info]: Ethernet e0b: Link down, check cable.
Sep 02 13:57:32 [localhost:coredump.host.spare.none:info]: No sparecore disk was found for host 0.
................................................................................
................................................................................
................................................................................
................................................................................
................................................................................
................................................................................
................................................................................
................................................................................
................................................................................
................................................................................
................................................................................
................................................................................
................................
Sep 02 13:58:13 [localhost:raid.disk.zero.done:notice]: Disk v5.16 Shelf ? Bay ? [NETAPP VD-1000MB-FZ-520 0042] S/N [20424400] : disk zeroing complete
.............................
Sep 02 13:58:14 [localhost:raid.disk.zero.done:notice]: Disk v5.17 Shelf ? Bay ? [NETAPP VD-1000MB-FZ-520 0042] S/N [20424401] : disk zeroing complete
Sep 02 13:58:19 [localhost:raid.disk.zero.done:notice]: Disk v5.18 Shelf ? Bay ? [NETAPP VD-1000MB-FZ-520 0042] S/N [20424402] : disk zeroing complete
Sep 02 13:58:19 [localhost:raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0/rg0/v5.18 Shelf ? Bay ? [NETAPP VD-1000MB-FZ-520 0042] S/N [20424402] to aggregate aggr0 has completed successfully
Sep 02 13:58:19 [localhost:raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0/rg0/v5.17 Shelf ? Bay ? [NETAPP VD-1000MB-FZ-520 0042] S/N [20424401] to aggregate aggr0 has completed successfully
Sep 02 13:58:19 [localhost:raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0/rg0/v5.16 Shelf ? Bay ? [NETAPP VD-1000MB-FZ-520 0042] S/N [20424400] to aggregate aggr0 has completed successfully
Sep 02 13:58:20 [localhost:wafl.aggr.btiddb.build:info]: Buftreeid database for aggregate 'aggr0' UUID '31e04d24-32a9-11e4-b783-123478563412' was built in 0 msec after scanning 0 inodes and restarting -1 times with a final result of starting.
Sep 02 13:58:20 [localhost:wafl.aggr.btiddb.build:info]: Buftreeid database for aggregate 'aggr0' UUID '31e04d24-32a9-11e4-b783-123478563412' was built in 0 msec after scanning 0 inodes and restarting 0 times with a final result of success.
Sep 02 13:58:20 [localhost:wafl.vol.add:notice]: Aggregate aggr0 has been added to the system.
Sep 02 13:58:21 [localhost:fmmb.instStat.change:info]: no mailbox instance on local side.
Sep 02 13:58:23 [localhost:fmmb.current.lock.disk:info]: Disk v5.16 is a local HA mailbox disk.
Sep 02 13:58:23 [localhost:fmmb.current.lock.disk:info]: Disk v5.17 is a local HA mailbox disk.
Sep 02 13:58:23 [localhost:fmmb.instStat.change:info]: normal mailbox instance on local side.
exportfs [Line 1]: NFS not licensed; local volume /vol/vol0 not exported
Sep 02 13:58:25 [localhost:secureadmin.ssl.setup.success:info]: Restarting SSL with new certificate.
Eventually you'll come to the basic setup prompts. Fill these out as appropriate for your setup, following the example below:
NetApp Release 8.1.2 7-Mode: Tue Oct 30 19:56:51 PDT 2012
System ID: 4055372820 ()
System Serial Number: 4055372-82-0 ()
Sep 02 13:58:25 [localhost:shelf.config.multipath:info]: All attached storage on the system is multi-pathed.
System Storage Configuration: Multi-Path
System ACP Connectivity: NA
slot 0: System Board
	Processors:         2
	Memory Size:        1599 MB
	Memory Attributes:  None
slot 0: 10/100/1000 Ethernet Controller V
	e0a MAC Address:    00:0c:29:2a:c6:ba (auto-1000t-fd-up)
	e0b MAC Address:    00:0c:29:2a:c6:c4 (auto-unknown-down)
	e0c MAC Address:    00:0c:29:2a:c6:ce (auto-unknown-down)
	e0d MAC Address:    00:0c:29:2a:c6:d8 (auto-unknown-down)
Please enter the new hostname []: adamantium
Do you want to enable IPv6? [n]: ^M
Please enter the IP address for Network Interface e0a []: 172.16.11.51
Please enter the netmask for Network Interface e0a [255.255.0.0]: 255.255.255.0
Please enter media type for e0a {100tx-fd, tp-fd, 100tx, tp, auto (10/100/1000)} [auto]: ^M
Please enter flow control for e0a {none, receive, send, full} [full]: ^M
Do you want e0a to support jumbo frames? [n]: ^M
Please enter the IP address for Network Interface e0b []: ^M
Please enter the IP address for Network Interface e0c []: ^M
Please enter the IP address for Network Interface e0d []: ^M
Please enter the name or IP address of the IPv4 default gateway: 172.16.11.1
The administration host is given root access to the filer's /etc files for system administration. To allow /etc root access to all NFS clients enter RETURN below.
Please enter the name or IP address of the administration host: ^M
Please enter timezone [GMT]: ^M
Where is the filer located? []: ^M
Enter the root directory for HTTP files [/home/http]: ^M
Do you want to run DNS resolver? [n]: y
Please enter DNS domain name []: metal.test
You may enter up to 3 nameservers
Please enter the IP address for first nameserver []: 172.16.11.10
Do you want another nameserver? [n]: ^M
Do you want to run NIS client? [n]: ^M
Sep 02 13:59:01 [localhost:asup.general.optout:debug]: This system will send event messages and weekly reports to NetApp Technical Support. To disable this feature, enter "options autosupport.support.enable off" within 24 hours. Enabling AutoSupport can significantly speed problem determination and resolution.
This system will send event messages and weekly reports to NetApp Technical
Support. To disable this feature, enter "options autosupport.support.enable off"
within 24 hours. Enabling AutoSupport can significantly speed problem
determination and resolution should a problem occur on your system.
For further information on AutoSupport, please see: http://now.netapp.com/autosupport/
Press the return key to continue. ^M
The Shelf Alternate Control Path Management process provides the ability
to recover from certain SAS shelf module failures and provides a level of
availability that is higher than systems not using the Alternate Control
Path Management process.
Do you want to configure the Shelf Alternate Control Path Management interface for SAS shelves [n]: ^M
Now you’ll be prompted to set a password. Note: if you plan to set up CIFS later, this should be a temporary password only, as you’ll have to change it later and might not be able to use the same one.
Setting the administrative (root) password for adamantium ...
New password:CPE1704TKS
Retype new password:CPE1704TKS
At this point, the system will burble a bit more and finally present a login screen. Notice that the log messages now contain the system’s real hostname.
Sep 02 13:59:10 [adamantium:passwd.changed:info]: passwd for user 'root' changed.
Sep 02 13:59:11 [adamantium:tar.csum.notFound:notice]: Stored checksum file does not exist, extracting local://tmp/prestage/mroot.tgz.
Sep 02 13:59:11 [adamantium:tar.csum.mismatch:notice]: Stored checksum 0 does not match calculated checksum 2015235500, extracting local://tmp/prestage/mroot.tgz.
Sep 02 13:59:26 [adamantium:tar.extract.success:info]: Completed extracting local://tmp/prestage/mroot.tgz.
Sep 02 13:59:26 [adamantium:tar.csum.notFound:notice]: Stored checksum file does not exist, extracting local://tmp/prestage/pmroot.tgz.
Sep 02 13:59:26 [adamantium:tar.csum.mismatch:notice]: Stored checksum 0 does not match calculated checksum 260110252, extracting local://tmp/prestage/pmroot.tgz.
Sep 02 13:59:41 [adamantium:tar.extract.success:info]: Completed extracting local://tmp/prestage/pmroot.tgz.
Tue Sep 2 13:59:41 GMT [rc:info]: Registry is being upgraded to improve storing of local changes.
Sep 02 13:59:41 [adamantium:kern.syslog.msg:info]: Registry is being upgraded to improve storing of local changes.
Tue Sep 2 13:59:41 GMT [rc:info]: Registry upgrade successful.
Sep 02 13:59:41 [adamantium:kern.syslog.msg:info]: Registry upgrade successful.
Sep 02 13:59:41 [adamantium:useradmin.added.deleted:info]: The role 'compliance' has been added.
Sep 02 13:59:41 [adamantium:useradmin.added.deleted:info]: The role 'backup' has been modified.
Sep 02 13:59:41 [adamantium:useradmin.added.deleted:info]: The group 'Backup Operators' has been modified.
Sep 02 13:59:41 [adamantium:kern.cli.cmd:debug]: Command line input: the command is 'hostname'. The full command line is 'hostname adamantium'.
Sep 02 13:59:42 [adamantium:dfu.firmwareUpToDate:info]: Firmware is up-to-date on all disk drives
Sep 02 13:59:42 [adamantium:kern.cli.cmd:debug]: Command line input: the command is 'ifconfig'. The full command line is 'ifconfig e0a `hostname`-e0a mediatype auto flowcontrol full netmask 255.255.255.0 mtusize 1500'.
add net default: gateway 172.16.11.1
Sep 02 13:59:42 [adamantium:kern.cli.cmd:debug]: Command line input: the command is 'route'. The full command line is 'route add default 172.16.11.1 1'.
Sep 02 13:59:42 [adamantium:kern.cli.cmd:debug]: Command line input: the command is 'routed'. The full command line is 'routed on'.
Sep 02 13:59:42 [adamantium:kern.cli.cmd:debug]: Command line input: the command is 'options'. The full command line is 'options dns.domainname metal.test'.
Sep 02 13:59:42 [adamantium:kern.cli.cmd:debug]: Command line input: the command is 'options'. The full command line is 'options dns.enable on'.
Sep 02 13:59:42 [adamantium:kern.cli.cmd:debug]: Command line input: the command is 'options'. The full command line is 'options nis.enable off'.
Sep 02 13:59:42 [adamantium:perf.archive.start:info]: Performance archiver started. Sampling 29 objects and 419 counters.
SSH Server is not configured. Please use the command 'secureadmin setup ssh' to configure this server.
Sep 02 13:59:42 [adamantium:mgr.opsmgr.autoreg.norec:warning]: No SRV records found for the System Manager server, or the server is not located on this subnet.
Sep 02 13:59:42 [adamantium:mgr.boot.disk_done:info]: NetApp Release 8.1.2 7-Mode boot complete. Last disk update written at Thu Jan 1 00:00:00 GMT 1970
Can't set cifs branchcache server secret.
Sep 02 13:59:42 [adamantium:httpd.config.mime.missing:warning]: /etc/httpd.mimetypes.sample file is missing.
Sep 02 13:59:42 [adamantium:httpd.config.mime.missing:warning]: /etc/httpd.mimetypes.sample file is missing.
Sep 02 13:59:42 [adamantium:httpd.config.mime.missing:warning]: /etc/httpd.mimetypes.sample file is missing.
Sep 02 13:59:42 [adamantium:mgr.boot.reason_ok:notice]: System rebooted.
Sep 02 13:59:42 [adamantium:callhome.reboot.unknown:info]: Call home for REBOOT
CIFS is not licensed. (Use the "license" command to license it.)
Sep 02 13:59:42 [adamantium:lmgr.dup.reclaim.locks:debug]: Aborting lock reclaims on volume: 'aggr0' initiated by: 'boot_grace_start' because of pending reclaims.
Sep 02 13:59:42 [adamantium:ha.local.dbladeId:debug]: D-blade ID of the local node is 0f19063c-32a9-11e4-80cf-aff3bc2cf444.
System initialization has completed successfully.
Sep 02 13:59:42 [adamantium:secureadmin.ssh.setup.passed:info]: SSH setup is done and ssh2 is enabled. Host keys are stored in /etc/sshd/ssh_host_key, /etc/sshd/ssh_host_rsa_key, and /etc/sshd/ssh_host_dsa_key.
Sep 02 13:59:42 [adamantium:unowned.disk.reminder:info]: 25 disks are currently unowned. Use 'disk assign' to assign the disks to a filer.
Ipspace "acp-ipspace" created
Sep 02 13:59:43 [adamantium:ip.drd.vfiler.info:info]: Although vFiler units are licensed, the routing daemon runs in the default IP space only.
Tue Sep 2 20:59:47 UTC 2014

Password:
Tue Sep 2 20:59:59 GMT [adamantium:tar.csum.notFound:notice]: Stored checksum file does not exist, extracting /mroot_late.tgz.
Tue Sep 2 20:59:59 GMT [adamantium:tar.csum.mismatch:notice]: Stored checksum 0 does not match calculated checksum 0, extracting /mroot_late.tgz.
Tue Sep 2 20:59:59 GMT [adamantium:tar.extract.success:info]: Completed extracting /mroot_late.tgz.
Tue Sep 2 20:59:59 GMT [adamantium:tar.csum.notFound:notice]: Stored checksum file does not exist, extracting /pmroot_late.tgz.
Tue Sep 2 20:59:59 GMT [adamantium:tar.csum.mismatch:notice]: Stored checksum 0 does not match calculated checksum 976743416, extracting /pmroot_late.tgz.
Tue Sep 2 21:00:01 GMT [adamantium:kern.uptime.filer:info]: 9:00pm up 2 mins, 0 NFS ops, 0 CIFS ops, 0 HTTP ops, 0 FCP ops, 0 iSCSI ops
Tue Sep 2 21:00:02 GMT [adamantium:kern.time.conv.complete:notice]: Timekeeping configuration has been propagated.
Tue Sep 2 21:00:05 GMT [adamantium:unowned.disk.reminder:info]: 25 disks are currently unowned. Use 'disk assign' to assign the disks to a filer.
Tue Sep 2 21:00:36 GMT [adamantium:tar.extract.success:info]: Completed extracting /pmroot_late.tgz.
Tue Sep 2 21:00:48 GMT [adamantium:callhome.performance.snap:info]: Call home for PERFORMANCE SNAPSHOT
At this point, the system is basically ready to use. If you wait just a few minutes more, you’ll see:
Wed Sep 3 00:13:17 GMT [adamantium:raid.rg.spares.low:warning]: /aggr0/plex0/rg0
Wed Sep 3 00:13:17 GMT [adamantium:callhome.spares.low:error]: Call home for SPARES_LOW
Wed Sep 3 00:14:03 GMT [adamantium:monitor.globalStatus.nonCritical:warning]: There are not enough spare disks. Assign unowned disks.
Another ten minutes or so and ONTAP gets impatient, choosing to assign the unowned disks for you:
Wed Sep 3 00:23:50 GMT [adamantium:diskown.AutoAssign.NoOwner:warning]: Automatic assigning failed for disk v4.16 (S/N 20521300) because none of the disks on the loop are owned by any filer. Automatic assigning failed for all unowned disks on this loop.
Wed Sep 3 00:23:50 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v5.19 (S/N 20911303) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 00:23:50 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v5.20 (S/N 20911304) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 00:23:50 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v5.21 (S/N 20911305) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 00:23:50 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v5.22 (S/N 20911306) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 00:23:50 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v5.24 (S/N 20911307) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 00:23:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v5.25 (S/N 20911308) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 00:23:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v5.26 (S/N 20911309) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 00:23:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v5.27 (S/N 20911310) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 00:23:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v5.28 (S/N 20911311) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 00:23:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v5.29 (S/N 20911312) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 00:23:52 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v5.32 (S/N 20911313) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 00:24:03 GMT [adamantium:monitor.globalStatus.ok:info]: The system's global status is normal.
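Incidentally, if you’d rather keep the disks unowned so you can practise assigning them yourself (see the disk-assignment section below), you can presumably switch the automatic behaviour off before the timer fires; a sketch using the standard 7-mode option:

adamantium> options disk.auto_assign off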
Though it might not seem obvious, a Password: prompt appeared somewhere in the preceding mess, so you can just type the password you set above and press Enter. If you’d like to see a fresh password prompt, you can use ^C or ^D to get one:
^C
Password: CPE1704TKS
Wed Sep 3 07:25:18 GMT [console_login_mgr:info]: root logged in from console
adamantium>
When you’re finished, a ^D will log you out.
adamantium> ^D
console logout

Password:
Now that the system is running, you can use SSH to connect remotely:
$ ssh root@172.16.11.51
root@172.16.11.51's password:
adamantium>
As with a console login, ^D is used to disconnect.
Note that the ServerAliveInterval ssh setting causes complaints:
Wed Sep 3 18:25:44 GMT [adamantium:openssh.invalid.channel.req:warning]: SSH client (SSH-2.0-OpenSSH_6.2) from 172.16.11.1 sent unsupported channel request (10, env).
These are harmless, but irritating. Setting ServerAliveInterval=0 on the command line or in ~/.ssh/config will fix it:
$ ssh -oServerAliveInterval=0 root@172.16.11.51
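If you connect often, a ~/.ssh/config entry saves the typing; a minimal sketch using the hostname and address from this walkthrough:

Host adamantium
    HostName 172.16.11.51
    User root
    ServerAliveInterval 0

After which a plain ssh adamantium does the right thing.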
Probably one of the first things to do is disable AutoSupport – you don’t really want your fictitious NetApp communicating with the mother ship, do you? At the console, do:
adamantium> options autosupport.enable off
If you want to check that the setting really took effect, you can:
adamantium> options autosupport.enable
autosupport.enable           off
Some time after you do this, you’ll likely get messages like:
Wed Sep 3 07:49:24 GMT [adamantium:asup.general.reminder:info]: AutoSupport is not configured to send to NetApp. Enabling AutoSupport can significantly speed problem determination and resolution should a problem occur on your system. For further information on AutoSupport, please see: http://now.netapp.com/autosupport/
Obviously this is of no import.
In a production environment, it might make sense to set an automatic logout for security or logistical reasons; in a VM simulation, I find it irritating. There are four separate settings:
adamantium> options autologout
autologout.console.enable    on
autologout.console.timeout   60
autologout.telnet.enable     on
autologout.telnet.timeout    60
Change these as you see fit, eg:
adamantium> options autologout.console.enable off
adamantium> options autologout.telnet.enable off
Restarting is simple:
adamantium> reboot
You could add a delay if you want to restart later. The delay is given in minutes, eg:
adamantium> reboot -t 10
Shutting down the appliance is similarly straightforward:
adamantium> halt
I’ll begin by mentioning that I'm not some kind of NetApp expert, so take this whole section with a grain of salt. My goal is to get a very basic data volume so that I can play with CIFS and FPolicy; if you need a ‘fancier’ setup, you’re on your own.
If you logged in right away (ie before AutoAssign had a chance to do its thing), the disk configuration should look like this:
adamantium> disk show
  DISK       OWNER                    POOL   SERIAL NUMBER  HOME
------------ -------------            -----  -------------  -------------
v5.17        adamantium(4055372820)   Pool0  20102201       adamantium(4055372820)
v5.18        adamantium(4055372820)   Pool0  20102202       adamantium(4055372820)
v5.16        adamantium(4055372820)   Pool0  20102200       adamantium(4055372820)
NOTE: Currently 25 disks are unowned. Use 'disk show -n' for additional information.

adamantium> disk show -n
  DISK       OWNER                    POOL   SERIAL NUMBER  HOME
------------ -------------            -----  -------------  -------------
v4.16        Not Owned                NONE   20712200
v4.17        Not Owned                NONE   20712201
v4.18        Not Owned                NONE   20712202
v4.19        Not Owned                NONE   20712203
v4.20        Not Owned                NONE   20712204
v4.21        Not Owned                NONE   20712205
v4.22        Not Owned                NONE   20712206
v4.24        Not Owned                NONE   20712207
v4.25        Not Owned                NONE   20712208
v4.26        Not Owned                NONE   20712209
v4.27        Not Owned                NONE   20712210
v4.28        Not Owned                NONE   20712211
v4.29        Not Owned                NONE   20712212
v4.32        Not Owned                NONE   20712213
v5.19        Not Owned                NONE   20102203
v5.20        Not Owned                NONE   20102204
v5.21        Not Owned                NONE   20102205
v5.22        Not Owned                NONE   20102206
v5.24        Not Owned                NONE   20102207
v5.25        Not Owned                NONE   20102208
v5.26        Not Owned                NONE   20102209
v5.27        Not Owned                NONE   20102210
v5.28        Not Owned                NONE   20102211
v5.29        Not Owned                NONE   20102212
v5.32        Not Owned                NONE   20102213
If you wait a few minutes, though, the situation changes slightly:
adamantium> disk show
  DISK       OWNER                    POOL   SERIAL NUMBER  HOME
------------ -------------            -----  -------------  -------------
v5.17        adamantium(4055372820)   Pool0  20102201       adamantium(4055372820)
v5.18        adamantium(4055372820)   Pool0  20102202       adamantium(4055372820)
v5.19        adamantium(4055372820)   Pool0  20102203       adamantium(4055372820)
v5.20        adamantium(4055372820)   Pool0  20102204       adamantium(4055372820)
v5.21        adamantium(4055372820)   Pool0  20102205       adamantium(4055372820)
v5.22        adamantium(4055372820)   Pool0  20102206       adamantium(4055372820)
v5.24        adamantium(4055372820)   Pool0  20102207       adamantium(4055372820)
v5.25        adamantium(4055372820)   Pool0  20102208       adamantium(4055372820)
v5.26        adamantium(4055372820)   Pool0  20102209       adamantium(4055372820)
v5.27        adamantium(4055372820)   Pool0  20102210       adamantium(4055372820)
v5.28        adamantium(4055372820)   Pool0  20102211       adamantium(4055372820)
v5.29        adamantium(4055372820)   Pool0  20102212       adamantium(4055372820)
v5.32        adamantium(4055372820)   Pool0  20102213       adamantium(4055372820)
v5.16        adamantium(4055372820)   Pool0  20102200       adamantium(4055372820)
NOTE: Currently 14 disks are unowned. Use 'disk show -n' for additional information.
You can then assign another unowned disk to the filer:
adamantium> disk assign v4.16
Wed Sep 3 08:51:43 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v4.16 (S/N 20712200) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Or assign the rest – why not?
adamantium> disk assign all
Wed Sep 3 08:56:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v4.17 (S/N 20712201) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 08:56:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v4.18 (S/N 20712202) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 08:56:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v4.19 (S/N 20712203) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 08:56:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v4.20 (S/N 20712204) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 08:56:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v4.21 (S/N 20712205) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 08:56:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v4.22 (S/N 20712206) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 08:56:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v4.24 (S/N 20712207) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 08:56:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v4.25 (S/N 20712208) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 08:56:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v4.26 (S/N 20712209) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 08:56:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v4.27 (S/N 20712210) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 08:56:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v4.28 (S/N 20712211) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 08:56:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v4.29 (S/N 20712212) from unowned (ID 4294967295) to adamantium (ID 4055372820)
Wed Sep 3 08:56:51 GMT [adamantium:diskown.changingOwner:info]: changing ownership for disk v4.32 (S/N 20712213) from unowned (ID 4294967295) to adamantium (ID 4055372820)
By default, the system comes with one aggregate already defined, aggr0:
adamantium> aggr status -R
Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal, block checksums)

      RAID Disk Device  Model Number      Serial Number  VBN Start  VBN End
      --------- ------  ------------      -------------  ---------  -------
      dparity   v5.16   VD-1000MB-FZ-520  20102200       -          -
      parity    v5.17   VD-1000MB-FZ-520  20102201       -          -
      data      v5.18   VD-1000MB-FZ-520  20102202       0          255999

adamantium> df -A
Aggregate               kbytes       used      avail capacity
aggr0                   921600     890792      30808      97%
aggr0/.snapshot              0      13800          0     ---%
You could create a tiny volume in the remaining space of that aggregate. But let’s add another disk, v5.19, instead:
adamantium> aggr add aggr0 -d v5.19
Wed Sep 3 19:05:08 GMT [adamantium:raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0/rg0/v5.19 Shelf ? Bay ? [NETAPP VD-1000MB-FZ-520 0042] S/N [20102203] to aggregate aggr0 has completed successfully
Addition of 1 disk to the aggregate has completed.
That looked promising. Let’s check whether that had any impact on its size:
adamantium> aggr status -R
Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal, block checksums)

      RAID Disk Device  Model Number      Serial Number  VBN Start  VBN End
      --------- ------  ------------      -------------  ---------  -------
      dparity   v5.16   VD-1000MB-FZ-520  20102200       -          -
      parity    v5.17   VD-1000MB-FZ-520  20102201       -          -
      data      v5.18   VD-1000MB-FZ-520  20102202       0          255999
      data      v5.19   VD-1000MB-FZ-520  20102203       256000     511999

adamantium> df -A
Aggregate               kbytes       used      avail capacity
aggr0                  1843200     891652     951548      48%
aggr0/.snapshot              0      14628          0     ---%
Perfect, aggr0 has twice as much space as before.
Using the aggr status -s command, one can see the available spare disks.
adamantium> aggr status -s

Spare disks

RAID Disk       Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------  ------------- ---- ---- ---- ----- --------------    --------------
Spare disks for block checksum
spare           v4.16   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v4.17   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v4.18   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v4.19   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v4.20   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v4.21   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v4.22   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v4.24   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v4.25   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v4.26   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v4.27   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v4.28   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v4.29   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v4.32   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v5.20   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v5.21   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v5.22   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v5.24   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v5.25   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v5.26   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v5.27   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v5.28   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v5.29   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
spare           v5.32   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
Lots of spares available here. Let’s take a few of those v4 disks and create a new aggregate, aggr1, with them:
adamantium> aggr create aggr1 -d v4.16 v4.17 v4.18
Wed Sep 3 21:38:36 GMT [adamantium:raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v4.18 Shelf ? Bay ? [NETAPP VD-1000MB-FZ-520 0042] S/N [20712202] to aggregate aggr1 has completed successfully
Wed Sep 3 21:38:36 GMT [adamantium:raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v4.17 Shelf ? Bay ? [NETAPP VD-1000MB-FZ-520 0042] S/N [20712201] to aggregate aggr1 has completed successfully
Wed Sep 3 21:38:36 GMT [adamantium:raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v4.16 Shelf ? Bay ? [NETAPP VD-1000MB-FZ-520 0042] S/N [20712200] to aggregate aggr1 has completed successfully
Creation of an aggregate with 3 disks has completed.
Wed Sep 3 21:38:37 GMT [adamantium:wafl.aggr.btiddb.build:info]: Buftreeid database for aggregate 'aggr1' UUID 'a8ddbab8-33b2-11e4-82ae-123478563412' was built in 0 msec, after scanning 0 inodes and restarting -1 times with a final result of starting.
Wed Sep 3 21:38:37 GMT [adamantium:wafl.aggr.btiddb.build:info]: Buftreeid database for aggregate 'aggr1' UUID 'a8ddbab8-33b2-11e4-82ae-123478563412' was built in 0 msec, after scanning 0 inodes and restarting 0 times with a final result of success.
Wed Sep 3 21:38:37 GMT [adamantium:wafl.vol.add:notice]: Aggregate aggr1 has been added to the system.
Might be a good idea to check whether that worked:
adamantium> aggr status aggr1 -R
Aggregate aggr1 (online, raid_dp) (block checksums)
  Plex /aggr1/plex0 (online, normal, active)
    RAID group /aggr1/plex0/rg0 (normal, block checksums)

      RAID Disk Device  Model Number      Serial Number  VBN Start  VBN End
      --------- ------  ------------      -------------  ---------  -------
      dparity   v4.16   VD-1000MB-FZ-520  20712200       -          -
      parity    v4.17   VD-1000MB-FZ-520  20712201       -          -
      data      v4.18   VD-1000MB-FZ-520  20712202       0          255999
Once you have the aggregate created, this is a fairly straightforward process; just decide how much space you want to allocate to a volume. First, let’s create a volume on the existing aggregate, aggr0. I intentionally guess the size too high, just to see how big it will let me create the volume.
adamantium> vol create vol1 aggr0 1g
vol create: Request to create volume 'vol1' failed because there is not enough space in the given aggregate. Either create 87.8MB of free space in the aggregate or select a size of at most 936MB for the new volume.
adamantium> vol create vol1 aggr0 896m
Creation of volume 'vol1' with size 896m on containing aggregate 'aggr0' has completed.
And again for the other aggregate, aggr1:
adamantium> vol create vol2 aggr1 1g
vol create: Request to create volume 'vol2' failed because there is not enough space in the given aggregate. Either create 132MB of free space in the aggregate or select a size of at most 893MB for the new volume.
adamantium> vol create vol2 aggr1 893m
Creation of volume 'vol2' with size 893m on containing aggregate 'aggr1' has completed.
Let’s see how that turned out. (The -x switch to df hides the .snapshot lines.)
adamantium> df -x
Filesystem          kbytes       used      avail capacity  Mounted on
/vol/vol0/          828324     140552     687772      17%  /vol/vol0/
/vol/vol1/          871632        132     871500       0%  /vol/vol1/
/vol/vol2/          868712        132     868580       0%  /vol/vol2/
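Most of the ‘missing’ space is the snapshot reserve (the default is 5% of each volume, which matches the numbers above). On a throwaway lab volume you could reclaim it; a sketch using the 7-mode snap command:

adamantium> snap reserve vol1 0

The new percentage should take effect immediately; df should show the extra space right away.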
The NetApp simulator comes with several products already licensed. To get a list, you can use the license command:
adamantium> license
a_sis                    ENABLED
cf                       not licensed
cf_remote                not licensed
cifs                     not licensed
compression              ENABLED
disk_sanitization        ENABLED
fcp                      not licensed
flash_cache              ENABLED
flex_clone               not licensed
flex_scale               ENABLED
flexcache_nfs            ENABLED
http                     ENABLED
insight_balance          not licensed
iscsi                    not licensed
multistore               ENABLED
nearstore_option         ENABLED
nfs                      not licensed
operations_manager       ENABLED
persistent_archive       ENABLED
protection_manager       ENABLED
provisioning_manager     ENABLED
smdomino                 not licensed
smsql                    not licensed
snapdrive_windows        not licensed
snaplock                 not licensed
snaplock_enterprise      not licensed
snapmanager_hyperv       not licensed
snapmanager_oracle       not licensed
snapmanager_sap          not licensed
snapmanager_sharepoint   not licensed
snapmanager_vi           not licensed
snapmanagerexchange      not licensed
snapmirror               not licensed
snapmirror_sync          not licensed
snapmover                ENABLED
snaprestore              not licensed
snapvalidator            not licensed
storage_services         ENABLED
sv_application_pri       not licensed
sv_linux_pri             ENABLED
sv_ontap_pri             not licensed
sv_ontap_sec             not licensed
sv_unix_pri              ENABLED
sv_vi_pri                ENABLED
sv_windows_ofm_pri       ENABLED
sv_windows_pri           ENABLED
syncmirror_local         not licensed
v-series                 not licensed
vld                      ENABLED
But without any licensed protocols (cifs, fcp, iscsi, nfs), this filer won’t be very useful.
Licences may be obtained from the same NetApp site where you downloaded the simulator. Once you have that, adding the licence is fairly straightforward. Here’s an example for cifs:
adamantium> license add DZDACHD
A cifs site license has been installed.
Run cifs setup to enable cifs.
Thu Sep 4 20:44:13 GMT [telnet_0:notice]: cifs licensed
Note that it’s a site licence, so it should be good for multiple filers.
Once CIFS has been licensed, it needs to be configured. Below is an example configuration. Note that the domain name (eg metal.test) will be entered already if, during the initial configuration, you set up the resolver using the same domain name.
adamantium> cifs setup
This process will enable CIFS access to the filer from a Windows(R) system.
Use "?" for help at any prompt and Ctrl-C to exit without committing changes.

Your filer does not have WINS configured and is visible only to
clients on the same subnet.
Do you want to make the system visible via WINS? [n]: ^M
A filer can be configured for multiprotocol access, or as an NTFS-only
filer. Since NFS, DAFS, VLD, FCP, and iSCSI are not licensed on this
filer, we recommend that you configure this filer as an NTFS-only filer

(1) NTFS-only filer
(2) Multiprotocol filer

Selection (1-2)? [1]: ^M
CIFS requires local /etc/passwd and /etc/group files and default files
will be created. The default passwd file contains entries for 'root',
'pcuser', and 'nobody'.
Enter the password for the root user []: natoar23ae
Retype the password: natoar23ae
The default name for this CIFS server is 'ADAMANTIUM'.
Would you like to change this name? [n]: ^M
Data ONTAP CIFS services support four styles of user authentication.
Choose the one from the list below that best suits your situation.

(1) Active Directory domain authentication (Active Directory domains only)
(2) Windows NT 4 domain authentication (Windows NT or Active Directory domains)
(3) Windows Workgroup authentication using the filer's local user accounts
(4) /etc/passwd and/or NIS/LDAP authentication

Selection (1-4)? [1]: ^M
What is the name of the Active Directory domain? [metal.test]:
In Active Directory-based domains, it is essential that the filer's time
match the domain's internal time so that the Kerberos-based authentication
system works correctly. If the time difference between the filer and the
domain controllers is more than 5 minutes, authentication will fail. Time
services are currently not configured on this filer.
Would you like to configure time services? [y]: ^M
CIFS Setup will configure basic time services. To continue, you must
specify one or more time servers. Specify values as a comma or space
separated list of server names or IPv4 addresses. In Active Directory-based
domains, you can also specify the fully qualified domain name of the domain
being joined (for example: "METAL.TEST"), and time services will use those
domain controllers as time servers.
Enter the time server host(s) and/or address(es) [METAL.TEST]: ^M
Would you like to specify additional time servers? [n]: ^M
In order to create an Active Directory machine account for the filer, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the METAL.TEST domain.
Enter the name of the Windows user [Administrator@METAL.TEST]: ^M
Password for Administrator@METAL.TEST: hunter2
CIFS - Logged in as Administrator@METAL.TEST.
If you already have a machine with the same name in AD, you’ll be prompted to overwrite it:
An account that matches the name 'ADAMANTIUM' already exists in Active
Directory: 'cn=adamantium,cn=computers,dc=metal,dc=test'. This is normal if
you are re-running CIFS Setup. You may continue by using this account or
changing the name of this CIFS server.
Do you want to re-use this machine account? [y]: ^M
Continuing along; note that, after confirming, the password for the root user is changed.
Mon Sep 8 18:47:54 GMT [adamantium:passwd.changed:info]: passwd for user 'root' changed.
Mon Sep 8 18:47:54 GMT [adamantium:wafl.quota.sec.change:notice]: security style for /vol/vol2/ changed from unix to ntfs
Mon Sep 8 18:47:54 GMT [adamantium:wafl.quota.sec.change:notice]: security style for /vol/vol0/ changed from unix to ntfs
Mon Sep 8 18:47:54 GMT [adamantium:wafl.quota.sec.change:notice]: security style for /vol/vol1/ changed from unix to ntfs
Mon Sep 8 18:48:31 GMT [adamantium:callhome.weekly:info]: Call home for WEEKLY_LOG
Mon Sep 8 18:48:31 GMT [adamantium:asup.general.reminder:info]: AutoSupport is not configured to send to NetApp. Enabling AutoSupport can significantly speed problem determination and resolution should a problem occur on your system. For further information on AutoSupport, please see: http://now.netapp.com/autosupport/
CIFS - Starting SMB protocol...
It is highly recommended that you create the local administrator account
(ADAMANTIUM\administrator) for this filer. This account allows access to
CIFS from Windows when domain controllers are not accessible.
Mon Sep 8 18:48:33 GMT [adamantium:kern.log.rotate:notice]: System adamantium (ID 4055372820) is running 8.1.2
Do you want to create the ADAMANTIUM\administrator account? [y]: ^M
Enter the new password for ADAMANTIUM\administrator: natoar23ae
Retype the password: natoar23ae
Currently the user "ADAMANTIUM\administrator" and members of the group
"METAL\Domain Admins" have permission to administer CIFS on this filer.
You may specify an additional user or group to be added to the filer's
"BUILTIN\Administrators" group, thus giving them administrative privileges
as well.
Mon Sep 8 18:48:56 GMT [adamantium:nbt.nbns.registrationComplete:info]: NBT: All CIFS name registrations have completed for the local server.
Would you like to specify a user or group that can administer CIFS? [n]: ^M
Welcome to the METAL.TEST (METAL) Active Directory(R) domain.
CIFS local server is running.
Mon Sep 8 18:49:07 GMT [adamantium:auth.ldap.trace.LDAPConnection.statusMsg:info]: AUTH: TraceLDAPServer- Starting AD LDAP server address discovery for METAL.TEST.
Mon Sep 8 18:49:07 GMT [adamantium:auth.ldap.trace.LDAPConnection.statusMsg:info]: AUTH: TraceLDAPServer- Found 1 AD LDAP server addresses using DNS site query (Default-First-Site-Name).
Mon Sep 8 18:49:07 GMT [adamantium:auth.ldap.trace.LDAPConnection.statusMsg:info]: AUTH: TraceLDAPServer- Found 1 AD LDAP server addresses using generic DNS query.
Mon Sep 8 18:49:07 GMT [adamantium:auth.ldap.trace.LDAPConnection.statusMsg:info]: AUTH: TraceLDAPServer- AD LDAP server address discovery for METAL.TEST complete. 1 unique addresses found.
You should be able to access basic information about the filer now, eg:
C:\Users\Administrator.METAL>net view \\adamantium
Shared resources at \\adamantium

Share name  Type  Used as  Comment
-------------------------------------------------------------------------------
HOME        Disk           Default Share
The command completed successfully.
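Mapping a share works as it would against any Windows file server; for example, from the same admin box:

C:\Users\Administrator.METAL>net use Z: \\adamantium\HOME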
Probably you’d like to create a user-accessible share once CIFS is set up. To create a share:
mithril> cifs shares -add data /vol/vol1
That was easy. To check its status:
mithril> cifs shares data
Name         Mount Point                       Description
----         -----------                       -----------
data         /vol/vol1
                        everyone / Full Control
By default, the share permissions (not NTFS permissions) are everyone / Full Control. (Use cifs access to change that.)
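For example, to drop everyone to read-only while leaving full control to domain admins, something like the following should work (rights keywords per the 7-mode cifs access usage; the group string is from my test domain, so substitute your own):

mithril> cifs access data everyone Read
mithril> cifs access data "METAL\Domain Admins" "Full Control"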
If you can’t recall which volumes you have available to share, use the vol status or df command:
mithril> vol status
         Volume State           Status            Options
           vol1 online          raid_dp, flex
                                64-bit
           vol0 online          raid_dp, flex     root
                                64-bit

mithril> df -x
Filesystem          kbytes       used      avail capacity  Mounted on
/vol/vol1/         1992296       1484    1990812       0%  /vol/vol1/
/vol/vol0/          828324     157080     671244      19%  /vol/vol0/
Modifying local groups (eg Administrators, Backup Operators, …) can be somewhat awkward and non-intuitive. Below are some examples to get you started.
Perhaps surprisingly, the best way to do this is via the Local Users and Groups MMC snap-in (lusrmgr.msc) on some Windows box – just point it at the filer. (Yes, this really works!) But it is possible to do this via the NetApp console, too.
To check the current group members:
mithril> useradmin group list Administrators -u
Name: administrator
Info: Built-in account for administering the filer
Rid: 500
Groups: Administrators
Or another way:
mithril> useradmin user list -g Administrators
Name: administrator
Info: Built-in account for administering the filer
Rid: 500
Groups: Administrators
Unfortunately, both of these only list users, not groups.
To list all members of a local group:
mithril> useradmin domainuser list -g Administrators
List of SIDS in Administrators
S-1-5-21-550356625-976087955-1409059158-500
S-1-5-21-2334459736-1525413079-3213713954-512
S-1-5-21-1456928772-1296060132-2560808083-512
For more information about a user, use the 'cifs lookup' and 'useradmin user list' commands.
Golly, that’s not terribly helpful, either. (RID 500 is the local Administrator and 512 is Domain Admins, but you won’t always luck out like this.)
To look up these SIDs:
mithril> cifs lookup S-1-5-21-2334459736-1525413079-3213713954-512
name = DEV\Domain Admins
It would get really tiring to do that for a long list of SIDs. (Now you know why I suggested lusrmgr.)
The fastest way I found to extract this information using a command-line interface is via some PowerShell magic. Replace mithril below with your filer hostname:
PS C:\> $group = [ADSI]'WinNT://mithril/Administrators'
PS C:\> @($group.Invoke('Members')) |% {$_.GetType().InvokeMember('ADsPath', 'GetProperty', $null, $_, $null)}
WinNT://DEV/mithril/administrator
WinNT://DEV/Domain Admins
WinNT://METAL/Domain Admins
Kind of ugly, isn’t it? But it works.
Compared to listing group membership, this is actually quite straightforward:
mithril> useradmin domainuser add joshua -g Administrators
SID = S-1-5-21-1456928772-1296060132-2560808083-1164
Domain User <joshua> successfully added to Administrators.
Similar to adding a domain user to a local group, removing them is just as easy:
mithril> useradmin domainuser delete joshua -g Administrators
SID = S-1-5-21-1456928772-1296060132-2560808083-1164
Domain User <joshua> successfully deleted from Administrators.
Here is an example FPolicy configuration. This particular example is for Varonis DatAdvantage.
adamantium> fpolicy create Varonis screen
File policy Varonis created successfully.
adamantium> fpolicy options Varonis required off
adamantium> fpolicy options Varonis cifs_setattr on
adamantium> fpolicy options Varonis monitor_ads on
adamantium> fpolicy options Varonis cifs_disconnect_check on
adamantium> fpolicy enable Varonis
File policy Varonis (file screening) is enabled.
Mon Sep 8 21:10:16 GMT [adamantium:fpolicy.fscreen.enable:info]: FPOLICY: File policy Varonis (file screening) is enabled.
An explanation of the above, in order:

- fpolicy create Varonis screen creates a new file-screening policy named Varonis.
- required off lets file operations proceed even when no FPolicy server is connected (you don’t want a monitoring outage to block file access).
- cifs_setattr on reports CIFS attribute-change (setattr) operations to the server.
- monitor_ads on also monitors operations on NTFS alternate data streams.
- cifs_disconnect_check on stops screen requests belonging to disconnected CIFS sessions from being sent.
- fpolicy enable Varonis turns the policy on.
You can check the options on the policy:
adamantium> fpolicy options Varonis
fpolicy options Varonis required: off
fpolicy options Varonis cifs_setattr: on
fpolicy options Varonis reqcancel_timeout: 0 secs (disabled)
fpolicy options Varonis serverprogress_timeout: 0 secs (disabled)
fpolicy options Varonis cifs_disconnect_check: on
Secondary file screening servers IP address list:
No secondary file screening servers list
fpolicy options Varonis monitor_ads: off
After the policy is enabled, you can instruct Varonis to begin collecting events from the NetApp filer. If successful, an event will be logged on the filer:
Mon Sep 8 21:25:39 GMT [adamantium:fpolicy.fscreen.server.connecting.successful:info]: FPOLICY: File policy server \\SERVER3 registered with the filer as a server for policy Varonis successfully.
And you can examine the FPolicy status:
adamantium> fpolicy
CIFS file policy is enabled.

File policy Varonis (file screening) is enabled.

File screen servers                 P/S  Connect time (dd:hh:mm)  Reqs  Fails
------------------------------------------------------------------------------
172.16.11.30  \\SERVER3             Pri  00:00:02                 0     0

    ServerID: 1  IDL Version: 1
    SMB Request Pipe Name: \ntapfprq
    Options enabled: async, version2, size_and_owner
    Operations monitored:
        File create,File rename,File delete,File read,File write,Setattr
        Directory rename,Directory delete,Directory create
    Above operations are monitored for CIFS only

List of extensions to screen: ???

List of extensions not to screen:
Extensions-not-to-screen list is empty.

Number of requests screened          : 0
Number of screen failures            : 0
Number of requests blocked locally   : 0
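Incidentally, the ??? above means every extension is screened, which is what Varonis wants. If you only cared about particular file types, you could narrow the include list; a sketch using the 7-mode fpolicy extensions subcommand (verify the exact syntax with fpolicy help on your release):

adamantium> fpolicy extensions include set Varonis doc,docx,xls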
If you disconnect the file screen server, you’ll see:
Mon Sep 8 21:55:38 GMT [adamantium:fpolicy.fscreen.server.connecting.disconnect:info]: FPOLICY: File policy server \\SERVER3 for policy Varonis deregistered and will be removed from the list of available file screen servers.
Mon Sep 8 21:55:38 GMT [adamantium:cifs.server.infoMsg:info]: CIFS: Warning for server \\SERVER3: Connection terminated.
Mon Sep 8 21:55:38 GMT [adamantium:fpolicy.fscreen.server.droppedConn:warning]: FPOLICY: File policy server 172.16.11.30 for fscreen policy Varonis has disconnected from the filer.
Mon Sep 8 21:55:38 GMT [adamantium:fpolicy.fscreen.server.connectedNone:warning]: FPOLICY: File policy Varonis (file screening) is enabled but no servers are connected to perform file screening for this policy.
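And should you ever want to back the policy out entirely, the inverse of the setup above is a disable followed by a destroy:

adamantium> fpolicy disable Varonis
adamantium> fpolicy destroy Varonis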
Due to the way FPolicy is designed, the NetApp filer actually connects to an SMB named pipe (\ntapfprq) on a Windows server (the “file screen server”, or “file policy server”) to send events. This works fine when the filer and the Windows server are in the same domain, as the filer can authenticate using its machine$ account.
However, when they’re on different domains, the filer authenticates as the anonymous user. This requires some security configuration on the Windows server:
| Policy | Security Setting |
|---|---|
| Network access: Let Everyone permissions apply to anonymous users | Enabled |
| Network access: Named pipes that can be accessed anonymously | Browser,ntapfprq |
Some of these settings might already be set in Group Policy; if so, you’ll need to change them there.
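If you’d rather script the change than click through secpol.msc, both policies are backed by well-known registry values; a hedged PowerShell sketch (run elevated on the file screen server, then reboot or wait for policy refresh):

PS C:\> Set-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' -Name EveryoneIncludesAnonymous -Value 1
PS C:\> Set-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' -Name NullSessionPipes -Value @('Browser','ntapfprq')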
If you fail to set these options, the NetApp will likely report:
Mon Sep 8 22:20:39 GMT [mithril:fpolicy.fscreen.server.connecting.successful:info]: FPOLICY: File policy server \\SERVER3 registered with the filer as a server for policy Varonis successfully.
Mon Sep 8 22:21:12 GMT [mithril:cifs.pipe.errorMsg:error]: CIFS: Error on named pipe with SERVER3: Error connecting to server, open pipe failed
Mon Sep 8 22:21:12 GMT [mithril:cifs.server.infoMsg:info]: CIFS: Warning for server \\SERVER3: Connection terminated.
Mon Sep 8 22:21:12 GMT [mithril:fpolicy.fscreen.server.connectError:error]: FPOLICY: An attempt to connect to fpolicy server 172.16.11.30 for policy Varonis failed [0xc0000022].
Mon Sep 8 22:21:12 GMT [mithril:fpolicy.fscreen.server.droppedConn:warning]: FPOLICY: File policy server 172.16.11.30 for fscreen policy Varonis has disconnected from the filer.
Mon Sep 8 22:21:12 GMT [mithril:fpolicy.fscreen.server.connectedNone:warning]: FPOLICY: File policy Varonis (file screening) is enabled but no servers are connected to perform file screening for this policy.
[...snip...]
Mon Sep 8 22:21:45 GMT [mithril:fpolicy.fscreen.server.connecting.successful:info]: FPOLICY: File policy server \\SERVER3 registered with the filer as a server for policy Varonis successfully.
Mon Sep 8 22:22:17 GMT [mithril:cifs.pipe.errorMsg:error]: CIFS: Error on named pipe with SERVER3: Error connecting to server, open pipe failed
Mon Sep 8 22:22:17 GMT [mithril:cifs.server.infoMsg:info]: CIFS: Warning for server \\SERVER3: Connection terminated.
Mon Sep 8 22:22:17 GMT [mithril:fpolicy.fscreen.server.connectError:error]: FPOLICY: An attempt to connect to fpolicy server 172.16.11.30 for policy Varonis failed [0xc0000022].
Mon Sep 8 22:22:17 GMT [mithril:fpolicy.fscreen.server.droppedConn:warning]: FPOLICY: File policy server 172.16.11.30 for fscreen policy Varonis has disconnected from the filer.
Mon Sep 8 22:22:17 GMT [mithril:fpolicy.fscreen.server.connectedNone:warning]: FPOLICY: File policy Varonis (file screening) is enabled but no servers are connected to perform file screening for this policy.
What is happening here? The filer registers with the FPolicy server, but when it then tries to open the \ntapfprq pipe as the anonymous user, the Windows server refuses (0xc0000022 is STATUS_ACCESS_DENIED), the connection is torn down, and the filer retries the whole dance every half-minute or so.
For a more technical explanation about the impact of these security settings, see [2].
It’s pretty obvious, based on the boot sequence, that Data ONTAP® is based, at least in part, on FreeBSD. Wouldn’t it be neat to get a real BSD shell? Well, you can.
The commands dealing with the shell are privileged. Therefore, before continuing, elevate to the advanced privilege level:
mithril> priv set advanced
Warning: These advanced commands are potentially dangerous; use
them only when directed to do so by NetApp personnel.
mithril*>
Note that the prompt ends with an asterisk (*) when you’re at the advanced privilege level.
After you’re done messing around with the shell, return to the normal level, admin:
mithril*> priv set admin
mithril>
A prerequisite for the systemshell is enabling the diag user and setting its password. By default, this account is locked and its password is unset:
mithril*> useradmin diaguser show
Name: diag
Info: Account for access to systemshell
Locked: yes
First, set the password. Note this is not done with the passwd utility, but with useradmin:
mithril*> useradmin diaguser password
Please enter a new password: Tr0ub4dor&3
Please enter it again: Tr0ub4dor&3
Mon Sep 8 23:55:57 GMT [mithril:passwd.changed:info]: passwd for user 'diag' changed.
Then unlock the account:
mithril*> useradmin diaguser unlock
When you’re done, it’s probably best practice to lock the account again:
mithril*> useradmin diaguser lock
You can’t log in directly as the diag user. First, log in normally (eg with the root account), elevate to advanced privileges, then execute the systemshell command.
$ ssh root@172.16.11.7
root@172.16.11.7's password: natoar23ae
mithril> priv set advanced
Warning: These advanced commands are potentially dangerous; use
them only when directed to do so by NetApp personnel.
mithril*> systemshell

Data ONTAP/amd64 (mithril) (ttyp0)

login: diag
Password: Tr0ub4dor&3

Warning: The system shell provides access to low-level
diagnostic tools that can cause irreparable damage to
the system if not used properly. Use this environment
only when directed to do so by support personnel.

mithril%
Wow, a shell! (tcsh, in case you’re wondering – the % prompt is a hint, but the output of set is definitive.)
Now, do whatever you like, remembering that any changes you make will probably fry your NetApp forever. Eg:
mithril% w
12:10AM  up 1:56, 1 user, load averages: 1.42, 1.28, 1.21
USER     TTY  FROM       LOGIN@  IDLE WHAT
diag     p0   localhost 12:05AM     - w
mithril% uname -a
Data ONTAP mithril 8.1.2 Data ONTAP Release 8.1.2 amd64
mithril% ps
 PID TT  STAT    TIME COMMAND
2474 p0  S    0:00.01 -csh (csh)
2485 p0  R+   0:00.00 ps
Note: Don’t exit systemshell with ^D. The ^D actually gets intercepted by the ‘outer’ shell (ie the usual NetApp console shell); this will disconnect your session completely, leaving a stale csh running forever. Worse, it can occasionally confuse the hell out of the SSH forwarder, locking out remote access.
The best way to get back to the console is exit:
mithril% exit
logout

mithril*>
At this point, you could lock the diag user and shed your excess privileges, as suggested earlier:
mithril*> useradmin diaguser lock
mithril*> priv set
mithril>
Note that priv set on its own is an abbreviation for priv set admin.
Feel free to contact me with any questions, comments, or feedback.