
AutoPkg: Crowd-sourcing Mac packaging and deployment


Thanks to everyone who attended Greg’s and my talk today at MacSysAdmin 2014 in Göteborg! Again, I give my sincere thanks to Tycho and all those who organize the fantastic MacSysAdmin conference every year. I’m honoured to be among such great company, speakers and attendees.

Here are the links for resources that were mentioned in the slides:

AutoPkg:

Getting support:

Other tools:

Work by Shea Craig:

AutoPkg integration for other software management systems:

Previous video sessions by Greg:

Articles on AutoPkg in MacTech Magazine, print and iPad:

  • Issues October, November 2013 (intros to AutoPkg)
  • Issues 2014.342, 2014.343 (writing custom Processors)

Using Charles Web Proxy to figure out the built-in updaters in Mac apps:


How Do I Contribute? MacTech 2014 presentation links


For everyone at the MacTech Conference in Los Angeles this year, here are links to the various resources referred to in my talk today on Git and source code collaboration.

Source Control Management:

Learning resources:

GUI Applications:

GitHub:

Git tricks:

Editor tools:

Miscellaneous:

MacTech Deployment Discussion/BOF/Q&A Notes


At MacTech Conference 2014 in Los Angeles, Graham Gilbert and I conducted a discussion / birds-of-a-feather session on the broad topic of OS and software deployment for OS X and iOS.

Allister Banks was present and dutifully took notes and reference URLs of specifics that were mentioned – solutions, blog posts, and other resources. We thought these would be great to share:

The most up-to-date version of the notes is here, and inline below:

More about suppressing diagnostics submissions popups in OS X Yosemite


With OS X Yosemite, Apple added an additional phase to the Setup Assistant: an offer to submit diagnostics info to Apple and third-party developers, which is displayed either as part of an initial setup or upon first login (similar to the iCloud prompt).

[Screenshot: the Yosemite diagnostics submission prompt]

Those who administer OS X clients typically look to disable such prompts on managed machines, either to avoid annoying users in shared workstation environments or because the organization wants to control whether diagnostics information is provided to Apple and third-party developers at all.

Rich Trouton and I have both documented what seemed to be an additional preference key that could be configured in the com.apple.SetupAssistant domain: LastSeenBuddyBuildVersion. However, with the release of OS X 10.10.1 on November 17, some admins reported seeing this dialog pop up again, and that it might be possible to suppress it by updating this key with the build number of OS X 10.10.1, 14B25.

Furthermore, whether it showed up seemed to depend on whether the user was an admin or not. If the user was not an admin, the Setup Assistant window would still appear, but would simply show the “Setting Up Your Mac…” animation that plays at the end of the Setup Assistant process.

[Screenshot: the “Setting Up Your Mac” animation]

Back when Yosemite was available only as developer previews, Rich had already documented on the Apple dev forums a process that seemed to disable this diagnostics prompt. This involves writing additional keys to a file at /Library/Application Support/CrashReporter/DiagnosticMessagesHistory.plist. In my testing, unchecking both checkboxes (Apple and app developers) for diagnostic submissions results in at least the following keys getting set in this plist:

<key>AutoSubmitVersion</key>
<integer>4</integer>
<key>AutoSubmit</key>
<false/>
<key>ThirdPartyDataSubmitVersion</key>
<integer>4</integer>
<key>ThirdPartyDataSubmit</key>
<false/>

I looked again at whether this was still something that comes into play given this most recent 10.10.1 update. Digging through the binary at /System/Library/CoreServices/SubmitDiagInfo suggests it is, with logging messages like “Diagnostic message history store was not writeable. Will not submit diagnostic messages” and “admin user was unable to write into diagnostic message history”, and methods that determine whether the authenticated user is an admin user. This all confirms that the service managing the diagnostic messages expects that admin users can write directly to this file (and indeed, systems I’ve seen all set this file to have read/write access for the admin group).

I’ve since performed tests deploying a new, never-booted 10.10.1 image that contains no LastSeenBuddyBuildVersion key in com.apple.SetupAssistant, whereas in previous Yosemite testing I’d been setting this key via a Configuration Profile.

So as far as I can tell, it may be enough to suppress this diagnostics prompt using only a DiagnosticMessagesHistory.plist file placed at /Library/Application Support/CrashReporter/DiagnosticMessagesHistory.plist, containing the above four keys. I’ve tested deploying this file within an image (built with AutoDMG) using a standard installer package with no scripts.
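
As a rough sketch of that package-based approach (the payload directory and package identifier below are hypothetical, and this assumes you have already created a DiagnosticMessagesHistory.plist containing the four keys above):

# Stage the plist at the path it should occupy on disk, then build a
# payload-only package with no scripts.
mkdir -p "payload/Library/Application Support/CrashReporter"
cp DiagnosticMessagesHistory.plist "payload/Library/Application Support/CrashReporter/"
pkgbuild --root payload \
    --identifier org.example.suppressdiagnostics \
    --version 1.0 \
    SuppressDiagnostics-1.0.pkg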

One could also apply these plist keys to a booted system (using Munki, for example) using a script like the following. Note the lack of the "$3" variable, meaning this script would not apply to non-booted volumes if run within a postinstall script. This script actually leaves the defaults as suggested by Apple, so tweak as desired – the objective here is to set them to something so that this phase of the Setup Assistant does not show.
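
The original embedded script isn’t reproduced here, but a minimal sketch along the same lines looks like this (run as root; adjust the two values at the top to your policy):

#!/bin/bash
# Sketch: write the four DiagnosticMessagesHistory keys on the booted volume.
SUBMIT_TO_APPLE=false
SUBMIT_TO_APP_DEVELOPERS=false

# defaults will append ".plist" to this path for us
PLIST="/Library/Application Support/CrashReporter/DiagnosticMessagesHistory"

/usr/bin/defaults write "$PLIST" AutoSubmit -bool "$SUBMIT_TO_APPLE"
/usr/bin/defaults write "$PLIST" AutoSubmitVersion -int 4
/usr/bin/defaults write "$PLIST" ThirdPartyDataSubmit -bool "$SUBMIT_TO_APP_DEVELOPERS"
/usr/bin/defaults write "$PLIST" ThirdPartyDataSubmitVersion -int 4

# SubmitDiagInfo expects admin users to be able to write this file directly,
# so match the ownership/permissions seen on stock systems.
/usr/sbin/chown root:admin "${PLIST}.plist"
/bin/chmod 664 "${PLIST}.plist"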

I consider this all still speculative. Rich Trouton has (also today) documented an alternate approach to suppressing this diagnostics dialog. My theory at this time of writing is that while perhaps updating the setting for LastSeenBuddyBuildVersion in the Setup Assistant prevents these additional screens from showing, it’s not what is actually determining the behavior of the diagnostics reporting mechanism.

Keeping your OS X VM guest tools current with Munki


I use VMware Fusion to test client software and deployment workflows, and virtual machines let me frequently take and roll back snapshots. Over time, the VMware guest OS tools drift out of date with the version of Fusion and are reported as needing to be updated or reinstalled. When this happens, things like pasteboard synchronization, automatic window resolution resizing and drag-and-drop file transfers can stop working. I’d rather not have to click “Update VMware Tools…” and install the tools by hand every time I notice they’re out of date between snapshots (which on my system seems to be frequently).

Luckily, I use Munki to manage OS X clients, and it’s great at updating software. In this post I’ll walk through the few steps I took to have all my test machines automatically keep their VMware tools up to date. The same logic should apply to other software management platforms like Casper, Absolute Manage or Puppet, using their respective mechanisms for customizable, discoverable attributes. This technique should also work for users of Parallels, if they use a sane OS X installer for their tools. VirtualBox has yet to ship with any OS X guest tools.

There are a few pieces involved in setting this up:

  1. Importing the guest tools installer into our software repo.
  2. Configuring this item as an install for our test machine or group of machines.
  3. If we want to do this in a smart way, we can also have the management system install these tools only if the client is actually a VMware Fusion VM. By doing this, we don’t have to explicitly set tools to install on specific machines, and instead let Munki do this conditionally. Munki’s mechanism for this is called conditional items.

The version of Fusion I’m using at the time of writing this is 7.0.1.

Importing the guest tools

VMware Fusion keeps its various guest tools in /Applications/VMware Fusion.app/Contents/Library/isoimages. OS X tools, of course, live in the darwin.iso disk image. If this disk image doesn’t exist, you may be on a system that has never created an OS X guest and whose VMware Fusion distribution didn’t come with tools included. VMware Fusion will download the guest tools as needed if this is the case, so create a dummy OS X VM just to get Fusion to download the tools.

We can open this up and see it’s got a plain installer:

[Screenshot: the mounted darwin.iso, containing “Install VMware Tools”]

This isn’t actually an installer; it’s an application disguised as one. It seems to be a shim app that simply loads an embedded installer package using the standard Installer app. I’m not really sure why this exists – but it does. Luckily, if we dig into this bundle (right-click and “Show Package Contents”), we’ll find the actual installer at Contents/Resources/VMware Tools.pkg. This is what we want to tell Munki to import. We can use munkiimport to import this disk image interactively.

(An easter egg: If you have a VMware Fusion 6.0.x app bundle laying around, take a closer look at the folders at the top level of the mounted darwin.iso image. It contains a small set of additional bootloader files and drivers, some of which are used for legacy DVD-based installs.)

When Munki installs from items on a disk image, it can look for these items at arbitrary paths, meaning we can import the vendor iso file directly. This saves us the small step of needing to first mount the disk image and locate the real package, and we can more easily automate this process. Being able to use the vendor installer directly for almost all software is one of Munki’s many strong points as far as cutting down on tedious repo management, especially for cases where an application may have multiple installer pkgs within a single DMG (Autodesk Maya, for example).

Here’s the output of munkiimport (note the use of the -p option to specify the path to our installer within the disk image, which makepkginfo will use to set the package_path key in the pkginfo plist file):

➜  ~ munkiimport -p "Install VMware Tools.app/Contents/Resources/VMware Tools.pkg" darwin.iso
      Item name [VMware Tools]: VMwareTools
   Display name [VMware Tools]:
    Description: Guest tools for VMware Fusion OS X VMs.
        Version [9.8.4]:
       Category: Developer Tools
      Developer: VMware
       Catalogs [testing]:

Import this item? [y/n] y
Upload item to subdirectory path []: support/VMware
Path /Volumes/munki_repo/pkgs/support/VMware doesn't exist. Create it? [y/n] y
No existing product icon found.
Attempt to create a product icon? [y/n] y
Attempting to extract and upload icon...
Created icon: /Volumes/munki_repo/icons/VMwareTools.png
Copying darwin.iso to /Volumes/munki_repo/pkgs/support/VMware/darwin-9.8.4.iso...
Saving pkginfo to /Volumes/munki_repo/pkgsinfo/support/VMware/VMwareTools-9.8.4...

In this case, I’ve accepted most of the defaults, given it the item name of VMwareTools and put it in a repo subfolder of support/VMware.

Recent versions of this tools installer seem to use a sane package identifier and version number, so I haven’t needed to make additional changes to the Munki pkginfo. If you do wish to add an installs array to your pkginfo, one place where you will find components likely to be unique for each tools version is the set of kernel extension bundles in /Library/Application Support/VMware Tools/, for example the vmhgfs.kext bundle. These tend to have the same CFBundleShortVersionString as the tools installer packages.
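
For example, makepkginfo can generate such an installs item from a machine that already has the tools installed (a sketch, using the kext path mentioned above); the resulting installs array can then be pasted into the pkginfo:

/usr/local/munki/makepkginfo -f "/Library/Application Support/VMware Tools/vmhgfs.kext"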

Note that the package version of 9.8.4 isn’t very meaningful to us, so you may wish to change the pkginfo’s version key to 7.0.1 just so that you know which tools package goes with which version of Fusion. The tools versions do seem to at least increment in a logical fashion with new Fusion releases. All that matters here is that you are consistent, because when Munki attempts to install these tools it will pick the highest version available in the first catalog where the item appears.
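
If you do change the version, something like the following works (a sketch – adjust the pkginfo path and extension to match where munkiimport saved yours), followed by a catalog rebuild:

/usr/libexec/PlistBuddy -c 'Set :version 7.0.1' /Volumes/munki_repo/pkgsinfo/support/VMware/VMwareTools-9.8.4
/usr/local/munki/makecatalogs /Volumes/munki_repo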

Making it available to clients

The model of how Munki decides what software to offer to clients is simple. A client looks for a specific “manifest” file on the server, matching a ClientIdentifier that’s been configured on the client or, in its absence, several fallback values: the client’s FQDN, its “short” hostname, the Mac’s serial number, or a default manifest called site_default.

These manifests are plists containing an array of catalogs that will be searched for the client, and typically one or more arrays containing installer items to be installed, updated (only if already installed), made available through an “Optional Installs” self-service interface, or removed.

While we could explicitly set this VMwareTools item to be installed on clients we know to be running in VMs, we can make this smarter and only process this install item if Munki determines that this client is in fact a VM. This way, we can define this in a manifest that may be shared by any number of VMs and physical machines. Munki can include manifests in other manifests using the included_manifests array key.

My testing clients (including my main workstation) all include a manifest called “utils”, which contains a list of software that’s useful for me to always have available on testing machines. This includes debugging utilities, command-line tools, and Mac admin tools that I find useful to always have at hand for testing. Since I’m including this manifest for all test machines anyway, I’d like to just add the VMware Tools to this same manifest, and have Munki figure out whether it’s needed. To do this, we’ll look at “admin-provided conditions” in Munki.

Writing an admin-provided condition

Conditions are Munki’s term for attributes of the client system that can be derived automatically every time Munki runs, and which it can use to conditionally determine whether certain items are installed. The equivalent in Casper is the extension attribute, or for Puppet, “facts” derived by the Facter tool. This is a common pattern among client/server management systems.

Munki expresses the conditions using Apple’s NSPredicate syntax, which allows us to define an expression using these conditions and which evaluates to either true or false. If true, whatever installs or removals are defined for that condition will apply to this client. Conditions can also be nested.

Munki includes some built-in conditions for attributes like the client’s OS X version, whether it’s a desktop or laptop, and more. There’s one called machine_model, which reports the model identifier (“iMac15,1”, etc.). Since VMware Fusion VMs use model identifiers like "VMware7,1", we could potentially use a condition that looks like: machine_model BEGINSWITH "VMware". For me this was not sufficient, because for certain VMs I make use of VMware’s ability to “spoof” different model identifiers and test some conditions in a way that better simulates running on physical hardware. Since there’s nothing else built into Munki I could use for this, I went the route of writing my own condition.

Munki supports these additional conditions by running any executable files located in the client’s /usr/local/munki/conditions directory. These executables are expected to populate values in a ConditionalItems.plist file in the Managed Installs directory that Munki uses for its data. They are frequently simple scripts that run some system command and extract the data of interest from its output. The data I was interested in deriving here is simply whether the client is a virtual or physical machine.

I’ve written a basic condition script that provides a value for a condition called virtual, and it’s posted here on GitHub.
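
For a rough idea of what such a condition script can look like, here is a simplified sketch (not the actual script linked above – it assumes VMware guests expose “VMware” somewhere in their IOPlatformExpertDevice entry):

#!/bin/bash
# Sketch of a Munki admin-provided condition that sets a "virtual" key.

# Respect a custom ManagedInstallDir if one is configured; the default is shown.
managed_installs_dir=$(/usr/bin/defaults read /Library/Preferences/ManagedInstalls ManagedInstallDir 2>/dev/null)
managed_installs_dir="${managed_installs_dir:-/Library/Managed Installs}"
conditional_items="${managed_installs_dir}/ConditionalItems"

if /usr/sbin/ioreg -rd1 -c IOPlatformExpertDevice | /usr/bin/grep -qi "VMware"; then
    virtual="vmware"
else
    virtual="physical"
fi

# defaults appends ".plist", giving us ConditionalItems.plist in the Managed Installs dir.
/usr/bin/defaults write "$conditional_items" virtual -string "$virtual"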

Copy this script to a VMware Fusion guest already configured with Munki and able to get updates from a manifest. Make sure the script is placed in /usr/local/munki/conditions, is executable, and not world-writable.

Now we can define a new block in our manifest for this client, using the conditional_items array. Here’s an example of a complete manifest including one conditional item:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>catalogs</key>
    <array>
        <string>testing</string>
        <string>production</string>
    </array>
    <key>conditional_items</key>
    <array>
        <dict>
            <key>condition</key>
            <string>virtual == 'vmware'</string>
            <key>managed_installs</key>
            <array>
                <string>VMwareTools</string>
            </array>
        </dict>
    </array>
    <key>managed_installs</key>
    <array>
        <string>AdobeRUM</string>
        <string>Charles</string>
        <string>SuspiciousPackage</string>
        <string>Xcode</string>
    </array>
</dict>
</plist>

And with this, we’ve seen how we can import VMware guest tools into Munki, and take advantage of Munki’s admin-provided conditions to dynamically install them on clients that can make use of them.

OS X admins: your clients are not getting background security updates


Have I got your attention? The more accurate (and longer) qualifier for this title should actually be: “admins who configure clients to not automatically check for software updates.”

From recent discussions in ##osx-server, some of us have determined that OS X’s “system data files and security updates” will only install automatically if a client is already configured to automatically check for updates. Many sysadmins managing OS X clients disable this setting so that they can control the distribution of these updates, but aren’t aware that their clients are then no longer receiving Apple’s background updates for several of its built-in security mechanisms, including XProtect and Gatekeeper.

Rich Trouton beat me to it with his post yesterday, but it prompted me to do a bit more digging into reproducing an issue that comes up when attempting the most obvious workarounds, which I’ll outline after giving some more context.

There are a couple of reasons admins usually disable automatic checks for software updates. Historically, one reason was that their users weren’t administrators, and therefore couldn’t install software updates themselves even if we wanted them to. Since OS X Mavericks, by default any user can install software updates via the Mac App Store interface or using the softwareupdate command-line tool (although there are supported ways to configure this setting). A more important reason is that admins often want to maintain some control over when certain updates are actually rolled out to their clients, and do limited testing of system updates. This can be done using Reposado or Apple’s Software Update service included in OS X Server, both of which allow local mirroring of Apple’s own Software Update catalogs for your managed clients (Reposado does a much better job of this).
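
For example, pointing a client at such a locally hosted catalog has traditionally been done with the CatalogURL preference (the server URL here is hypothetical – substitute your own Reposado or SUS catalog URL):

sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate CatalogURL "http://sus.example.org:8088/index.sucatalog"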

Disabling automatic checks for updates also has the effect of preventing the system from prompting the user about the new updates. This is usually done in tandem with an implementation of a client management platform like Munki, which is able to provide the user with an interface to install the system updates coming from your own server. We are replacing Apple’s user-facing mechanisms for system updates with our own.

Disabling the automatic checks is typically done by running the softwareupdate --schedule off command as part of a setup script, or setting AutomaticCheckEnabled to false in com.apple.SoftwareUpdate. We end up with an App Store preference pane that looks like this, with nothing checked:

Running `softwareupdate --schedule off` leaves the App Store preferences like this.

In recent versions of OS X, Apple began using its Software Update service (which also drives the system software updates that show in the App Store or via the softwareupdate command-line tool) as a mechanism for installing “background and critical” updates, which are installed silently in the background with no notification to the user. Several families of updates have been seen so far using this mechanism (and there are more):

  1. XProtect, for storing plugin version blacklisting and malicious code signature info
  2. the GateKeeper Opaque Whitelist
  3. Incompatible Kernel Extension Configuration Data

We’ll take the first two as examples: XProtect stores its data in /System/Library/CoreServices/CoreTypes/XProtect* files, and Gatekeeper Configuration Data in /private/var/db/gkopaque.bundle. Both of these sets of files include standard Info.plist files with nice, always-incrementing integer version strings.
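
For example, you can check which version of the Gatekeeper configuration data a client currently has (a quick sketch):

defaults read /private/var/db/gkopaque.bundle/Contents/Info CFBundleShortVersionString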

Users will never see these updates in the App Store UI. These updates may be run when other updates take place, but they also run on their own schedule.

If you run Reposado or the Software Update service in OS X Server, you’ll see these updates listed alongside standard user-facing updates. If you look at the actual .dist files that go alongside these updates, you’ll notice they include a config-data “type” attribute up at the top in the options element. (Printing out an update’s distribution file is easy with Reposado: repoutil --dist).

You may be enabling these updates along with other updates, thinking that they will get installed. They might, but only if the clients pointing to your server have automatic checks enabled.

App Store preferences with no automatic downloads/installs, just checks.

See some of the undocumented softwareupdate options Greg Neagle has documented here, namely the --background and --background-critical options. You might think: what if we just run these commands ourselves on a schedule? With these options, Software Update will schedule a scan (returning immediately) for installing only the config-data updates, but it will not actually install them if background checks are disabled.

I’d encourage you to test this for yourself: find a test client that has been configured with background checks disabled for some time (these updates, particularly Gatekeeper, are frequently updated by Apple, often at least every couple of weeks). If you can’t find one, you can manually adjust the CFBundleShortVersionString in /private/var/db/gkopaque.bundle/Contents/Info.plist to something lower than the current version, which is listed as the update version in the Software Update catalog (again with Reposado, visible with repoutil --updates). At the time of writing this post, the current version is 52, released December 10, 2014.
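
For instance, to fake an out-of-date Gatekeeper bundle, something like this works (a sketch – only do this on a test machine you don’t mind breaking):

sudo /usr/libexec/PlistBuddy -c "Set :CFBundleShortVersionString 1" /private/var/db/gkopaque.bundle/Contents/Info.plist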

First, make sure that you have automatic checks disabled, then run:

sudo softwareupdate --background-critical
softwareupdate[47096]: Triggering background check with normal scan (critical and config-data updates only) ...

While this runs, you can tail -f /var/log/install.log and watch the activity:

Dec 17 09:49:15 host.my.org softwareupdated[494]: Received xpc_event: ManualBackgroundTrigger
Dec 17 09:49:15 host.my.org softwareupdated[494]: BackgroundActivity: Initiating com.apple.SoftwareUpdate.Activity activity
Dec 17 09:49:15 host.my.org softwareupdated[494]: BackgroundActivity: Starting Background Check Activity
Dec 17 09:49:15 host.my.org softwareupdated[494]: BackgroundActions: Automatic checking disabled
Dec 17 09:49:15 host.my.org softwareupdated[494]: BackgroundActivity: Finished Background Check Activity

That was pretty quick. Now go back and enable background checks (via the App Store preference pane, softwareupdate --schedule on, or sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate AutomaticCheckEnabled -bool true) and re-run the sudo softwareupdate --background-critical command. The trigger will occur and softwareupdate will return immediately, but you should now see some more interesting activity in your install.log. Here’s a snippet of mine run today:

Dec 17 10:00:25 host.my.org softwareupdated[494]: Available Products Changed
Dec 17 10:00:25 host.my.org softwareupdated[494]: Scan (f=1, d=1) found 5 updates: 031-14032, 031-14180, 031-14221, 031-14263, 041-9395 (plus 119 predicate-only)
Dec 17 10:00:25 host.my.org softwareupdated[494]: BackgroundActions: 0 user-visible product(s):
Dec 17 10:00:25 host.my.org softwareupdated[494]: BackgroundActions: 5 enabled config-data product(s): 031-14032, 031-14180, 031-14221, 031-14263, 041-9395 (want active updates only)
Dec 17 10:00:25 host.my.org softwareupdated[494]: BackgroundActions: 0 firmware product(s):

There were actually five different configuration data updates found in this run: Apple Displays metadata, Chinese Wordlist, Incompatible Kexts, XProtect and Gatekeeper Opaque Bundle. You’ll see later in the install log that they all get installed.

So given that we can ensure these are installed by enabling automatic checks, what’s to stop us just disabling automatic installations on clients, and leaving the checks enabled? Earlier I mentioned that enabling automatic checks has the effect of prompting the user to install them when recommended updates are found, and since these updates are now part of the App Store application, this prompt may also include available updates for App Store apps (which you may not want if you are already distributing using an institutional Apple ID or have disabled updates of App Store apps by regular users).

Since I wanted to ensure I could reproduce this reliably on Mavericks and Yosemite, I ended up recording a short video. It demonstrates the App Store prompt that seems to occur if automatic checks are enabled and a manual run of softwareupdate --background-critical is done while there are other available updates.

A couple of additional points. If you try to reproduce this, you may have inconsistent results because of the notification system having its own schedule of when it decides to notify the user (relative to when it did last, what feedback the user gave, etc.). The second point is that I’ve had some success on Yosemite in simply disabling the automatic check immediately after scheduling the check. In other words:

sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate AutomaticCheckEnabled -bool true
sudo softwareupdate --background-critical
sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate AutomaticCheckEnabled -bool false

In my limited testing on Mavericks, this triggered the prompt anyway, and in my limited testing on Yosemite, it didn’t. But either way, this would be a fragile mechanism to rely on. I would not be confident that I have covered enough scenarios in testing to implement some kind of scheduled script that would run these commands in an attempt to automate these updates on clients.

Another approach to getting these updates out to clients would be via scripting, or something like AutoPkg recipes, to fetch the packages from Apple’s Software Update servers. This would allow an admin to deploy the packages in any way they see fit. The problem with this approach, however, is that one would need to keep a close eye on exactly the conditions under which these updates install. This is determined by the pkg .dist files, and specifically a pile of difficult-to-read JavaScript functions doing mostly boolean logic in order to isolate updates to clients meeting specific conditions. The Gatekeeper and XProtect examples I’ve described in this article follow somewhat predictable patterns, but it’s common for updates like these to be split across multiple OSes, merged together, split again, get reposted, etc. It would be a lot of work for someone to continually audit these distribution files to ensure that the right updates are going to the right clients. A misplaced code signature patch going into your clients’ /System/Library could have serious implications.

The fact that it’s common practice for admins to disable software update checks, and that this disables all installation of config-data updates, seems to clash with Apple’s desire to keep their OS updated quickly and transparently with configuration data that helps the systems function more reliably and securely. I consider this issue to be a security bug. My bug report on this issue is #18939764, which has been classified by Apple as an enhancement request.

darwinup, Apple’s Darwin Update utility


Yesterday in ##osx-server, Pepijn Bruienne mentioned having stumbled upon an OS X system binary he’d never seen before, which was new to me as well: darwinup. This tool is used (or was used – public development seems to have stopped around OS X 10.7) to manage versions of OS X system components by installing “roots” distributed in a variety of ways. It abstracts several different archive formats and wraps tools like curl, tar and rsync to perform its tasks.

It can install and remove roots from rsync-able locations and HTTP(S) URLs, and keeps track (in an SQLite database) of its activity and overwritten files, such that it can roll back installations of system components to previous versions. I’ve seen it included on OS X systems as far back as OS X 10.7 (Lion) up through 10.10 (Yosemite). My immediate reaction was that this was like a basic package manager that’s included with every copy of OS X.

Digging a bit further, this tool is part of the DarwinBuild project, whose public development seems to have stopped around 10.6/10.7 (like most other macosforge projects). According to their notes, however, it is definitely not a package manager. These notes contain a much more thorough explanation of the tool than its manpage, so I’d encourage you to read through them if you’re interested in why the tool exists. The manpage has a few useful examples, such as installing components from Apple’s (similarly-abandoned) Roots repository. This repo of compiled OS X components was also complete news to me.

The darwinup tool obviously exists for testing and development purposes, so I would strongly recommend not installing Apple’s old roots onto a system you care about; they are now so outdated that they could overwrite critical system components with incompatible versions. Of course, you can always roll back…

Here’s an example of installing compiled bootp tools from 10.7.2. You can also add additional -v options to print out more details about exactly what it’s doing with network and files-on-disk activity.

$ sudo darwinup install http://src.macosforge.org/Roots/11C74/bootp.root.tar.gz

A /AppleInternal
A /AppleInternal/Developer
A /AppleInternal/Developer/Headers
A /AppleInternal/Developer/Headers/BSDPClient
A /AppleInternal/Developer/Headers/BSDPClient/BSDPClient.h
A /AppleInternal/Developer/Headers/DHCPServer
A /AppleInternal/Developer/Headers/DHCPServer/DHCPServer.h
  /System
  /System/Library
  /System/Library/LaunchDaemons
  /System/Library/LaunchDaemons/bootps.plist
  /System/Library/SystemConfiguration
  /System/Library/SystemConfiguration/IPConfiguration.bundle
  /System/Library/SystemConfiguration/IPConfiguration.bundle/Contents
U /System/Library/SystemConfiguration/IPConfiguration.bundle/Contents/Info.plist
  /System/Library/SystemConfiguration/IPConfiguration.bundle/Contents/MacOS
U /System/Library/SystemConfiguration/IPConfiguration.bundle/Contents/MacOS/IPConfiguration
  /System/Library/SystemConfiguration/IPConfiguration.bundle/Contents/Resources
  /System/Library/SystemConfiguration/IPConfiguration.bundle/Contents/Resources/English.lproj
U /System/Library/SystemConfiguration/IPConfiguration.bundle/Contents/Resources/English.lproj/Localizable.strings
  /usr
  /usr/lib
U /usr/lib/libBSDPClient.A.dylib
  /usr/lib/libBSDPClient.dylib
U /usr/lib/libDHCPServer.A.dylib
  /usr/lib/libDHCPServer.dylib
  /usr/libexec
U /usr/libexec/bootpd
  /usr/local
  /usr/local/bin
A /usr/local/bin/bsdpc
A /usr/local/darwinbuild
A /usr/local/darwinbuild/receipts
A /usr/local/darwinbuild/receipts/bootp
A /usr/local/darwinbuild/receipts/fb5424a830958c1b4cc8191de7b8c6e9d31f1aaf
  /usr/sbin
U /usr/sbin/ipconfig
  /usr/share
  /usr/share/man
  /usr/share/man/man5
  /usr/share/man/man5/bootptab.5
  /usr/share/man/man8
  /usr/share/man/man8/bootpd.8
U /usr/share/man/man8/ipconfig.8
Installed archive: 2 bootp.root.tar.gz
9C140C50-A30E-453D-8F66-01207F4539A8

$ sudo darwinup list

Serial UUID                                  Date          Build    Name
====== ====================================  ============  =======  =================
2      17FEFDD5-E202-485A-B429-E5407881A845  Jan 21 11:33  13F34    bootp.root.tar.gz

Note that the build number is not that of the root that was installed; it’s the build of the currently-running system (10.9.5).
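
To roll this experiment back out, darwinup can uninstall an archive by serial, UUID or name (a sketch based on the manpage – verify on your own system first):

$ sudo darwinup uninstall bootp.root.tar.gz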

Now let’s run the bsdpc utility that was just installed into /usr/local/bin to display info about available NetBoot images:

$ sudo bsdpc
Discovering NetBoot servers...

NetBoot Image List:
   1. DeployStudio-10.10-1.6.12 [Mac OS X Server] [Install] [Default]

Again, use this with care. We can see that these bootp tools installed system components in addition to executable binaries (from a 10.7 system onto a 10.9 system), so this is just a demonstration of the capabilities of darwinup. Don’t do this at home!

Security Updates leaving mach_kernel visible


In the past, there have been cases where system updates for 10.8.5 (and possibly earlier versions) left the OS X kernel (at /mach_kernel) visible to users in the Finder. This file has since moved to /System/Library/Kernels/kernel in OS X Yosemite, but prior to Yosemite it was located at /, and included in the package payload for system updates like the OS X Combo/Delta updates and Security Updates.

OS X installers and updaters typically keep this file hidden in the Finder using a tool called SetFile, which is able to set miscellaneous file flags including the “hidden” flag. The Security Update 2015-002 for Mavericks, released on March 9, 2015, does not include any of the postinstall “actions” (miscellaneous scripts and tools executed by a master script) in the installer that were present in the 2015-001 update.

Comparison of SecUpd2015-001 and -002 installer scripts in Pacifist.

We have few admin users at my organization, but it has happened at least once that a curious admin user wondered what this “mach_kernel” file was and moved it to the trash, only to find that their system volume would no longer boot.

Why does Apple continue to ship this bug when they have a knowledge base article on it?

Why does Apple not simply set the hidden flag on the file in the package payload, rather than depending on a script to set it? It is possible to set these flags on a file in a payload, with no scripting required to hide it.

We can fix this easily by distributing a script to clients that would do something like this:

#!/bin/sh
# Re-hide /mach_kernel if an update has left it visible in the Finder.
if [ -e /mach_kernel ]; then
  if ! /bin/ls -lO /mach_kernel | grep hidden > /dev/null; then
    echo "Un-hidden /mach_kernel found, hiding"
    /usr/bin/chflags hidden /mach_kernel
  fi
fi

While Apple has acknowledged this issue in a knowledge base article, I still felt it was worth opening a bug.


Experiments with AutoDMG, System Image Utility and OS version compatibility


I use AutoDMG to build restorable system images for OS X. It uses a technique similar to System Image Utility’s NetRestore workflow: run the OS X installer on one machine, but targeted at a disk image, which is later converted to a read-only image that can be restored to a Mac.

While running the 10.10.3 developer seeds on my build machine I noticed my AutoDMG builds seemed to never complete. After looking more closely at what processes were running, I noticed a suspicious process: /System/Library/Frameworks/Automator.framework/Versions/A/Support/update_automator_cache --system --force, which was called by a postinstall script in the com.apple.pkg.Essentials package. The process wasn’t actually hung – upon inspection using the opensnoop DTrace script, it was continuously re-indexing Automator bundles in an infinite loop.
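
(For reference, watching a single process’s file opens with opensnoop looks roughly like this; it needs root for DTrace:)

sudo opensnoop -n update_automator_cache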

Sometimes postinstall scripts have issues because there is a missing "$3" in a path, which would otherwise be substituted with the target volume path. In my case, maybe this was just an issue with the beta version of the Automator framework: the update_automator_cache process trying to cache Automator bundles was the 10.10.3 seed version, not the 10.10.2 version actually being installed. It runs, however, chrooted into the volume being installed to, meaning it acts as though the installation volume is its root filesystem.

Because the issue seemed to be with this particular context of the OS installer, System Image Utility’s NetRestore image creation workflow exhibited the same issue. So, I filed a bug, but Apple closed it immediately due to my combining 10.10.3 tools with a 10.10.2 installer, which they claimed was unsupported. Of course, at the time there was no 10.10.3 installer available.

Yesterday (April 8, 2015) 10.10.3 was released, and I’m no longer able to reproduce the issue; I was also still able to build a 10.10.2 image on a 10.10.3 host. Allister Banks provided some additional data yesterday: he experienced the same looping process when building a 10.10.3 image from a 10.10.2 host.

What’s interesting is that Apple also just released a KB article which seems to describe my issue exactly, in a succinct summary. They recommend always using the most recent version available, for “best results.” The difference here, however, is that they claim a 10.10.3 host can build “10.10.2 and earlier.” So the response I got in my radar might not be entirely accurate, but this is also confused by the fact that my earlier test was using the developer seed and not a released version.

My own summary is that while there might be aspects of SIU that change between minor releases, and building an image from an older installer source should usually work, if there are issues because of a bug in a tool run by a postinstall script, or an incompatibility with the versions of system frameworks used in this context, there’s not much one can do except wait until the update is generally available and a new OS installer appears in the Mac App Store.

Reclaiming inodes from Jenkins


A pet project I maintain is ci.autopkg.org, a Jenkins instance that runs all AutoPkg recipes in the autopkg/recipes repo on a schedule and reports any cases where recipes are failing. There are currently 126 jobs, one for each of the 126 recipes in the repo.

These AutoPkg recipes must be run on OS X, so there is always at least one OS X slave connected to the master, which runs Ubuntu on the cheapest Digital Ocean droplet available. Every time a job runs on a slave (currently about every eight hours), Jenkins logs the results on the master in a set of files on disk, known as a “build.” By default, when a new Jenkins job is created, it is configured to keep builds forever, though they can be deleted manually. Builds may also include “artifacts” – binaries or other output from a job – but in my case a build is mostly just state and console output saved from the run.

This service ran well enough for a year and a half before it suddenly reported having no free disk space. And yet, df reported more than 50% of the 20GB volume as available. Using the -i option for df, however, revealed that while I had space available, I had zero file inodes left on the system:

$ df -i

Filesystem      Inodes   IUsed  IFree IUse% Mounted on
/dev/vda       1310720 1310720      0   65% /

Clearly I had way more files on disk than I had any use for, and this was a good opportunity to prune them. In my case the builds are not useful for much beyond seeing the history of successes and failures of the recipes, and generally we only really care about whether the recipes work today, in their current state.

After a bit of digging in the job configuration, I found where one can restrict the number of builds that are kept – right near the top, “Discard Old Builds”:

Discard Old Builds job configuration

Since each recipe is its own job and they are numerous, I use the Jenkins Job DSL plugin to generate these jobs programmatically. It may seem odd that this plugin gets invoked as a build step of a job – in other words, you run a job that builds other jobs – but it is quite powerful and is very actively developed.

All I really needed to do to configure these jobs to retain only the last 30 days’ worth of builds was add one line to my Groovy script invoked by the plugin.

Now that these jobs were all reconfigured to keep builds for a maximum of 30 days, what would happen to all the existing old builds? As documented towards the bottom of this issue, the log rotation will come into effect the next time a build is run, so each job would prune its old builds as each subsequent build is run. Or, as the original reporter documented, it’s possible to run a groovy statement in the console to prune all the jobs’ old builds immediately, which I confirmed worked as well.

For more details on how this “AutoPkg CI” jenkins instance is created, check out its configuration on GitHub.

Upcoming conference talks for 2015


Pepijn Bruienne just posted a nice summary of the Apple administration-focused conferences coming up in 2015. I’m also happy to be a small part of several of those coming up:

Anthony Reimer has organized for its second year the MacDeployment workshop, hosted at the University of Calgary’s Integrated Arts Media Labs. I’m looking forward to visiting as the IAML seems similar to the environment I support at Concordia University’s Faculty of Fine Arts, and the Prairies are one of the only parts of Canada I’ve not yet visited.

Thanks to some great timing, one day later I’ll be speaking at the new MacDevOps YVR conference taking place at Simon Fraser University in Vancouver, organized by Mat X and Brian Warsing. Mat and Brian have recruited a great lineup of speakers, so it’s going to be a very packed day. I was born in BC but didn’t grow up there, and it will be almost exactly 15 years since I was last in Vancouver.

On these two days (June 18-19), I’ll be giving an introduction to Python in the context of Mac administration.

Finally, I’m lucky to be returning to Göteborg, Sweden for MacSysAdmin 2015 on September 29 through October 2. It was an absolute pleasure last year to attend, speak and meet so many members of our community, thanks to Tycho – and I’m looking forward to traveling there again. This time I’ll be giving a tour of and demoing some of the Mac sysadmin tools I maintain on GitHub.

Hope to see you at these great events!

Adobe Creative Cloud Deployment – Overview


Adobe’s Creative Cloud licensing models add some new layers of complexity to large-scale deployment in organizations. As I’ve been planning and testing our rollout in areas with managed, shared workstations, I’m routinely uncovering new information, and I’ll cover the parts I think might be useful to others in several posts. There are several aspects here: 1) simply wrapping one’s head around the different licensing models, 2) understanding differences in the mechanisms with which these licenses can be deployed to machines, and 3) how to maintain all of this using a software management system such as Munki or Casper. While I can only speak from experience with a subset of the licensing types and my management tool of choice (Munki), this may be useful if you have some of these in common, or you may be able to port some specifics to another management system.

An additional preface to these posts: having been looking into this for quite some time, I still regularly feel like I’m stumbling in the dark; I cannot keep any of the license agreement acronyms in my head for more than several minutes, and in general I usually feel like I’m doing this wrong. Lacking any better guidance, however, I’m documenting some of my findings. I expect to need to revise these strategies over time.

Thanks to Patrick Fergus, who provided some additional details and clarifications about the different license types. Some of the points below are his words verbatim. Patrick has also written a number of very detailed posts on subjects I’m covering in these posts.

First, let’s review some of the new subscription-based licensing models and how that affects the mechanisms used to deploy Creative Cloud.

There are two new axes along which we can categorize licenses:

  • A license agreement type of either Teams (including education-focused licenses) or Enterprise
  • A type of license, including “Named” and “Device” (for education) or “Serial Number” (for enterprise)

Named licenses require sign-in to use the software, and these sign-ins come in three different flavors:

  • Adobe IDs, which are owned by the user and authenticated by Adobe
  • Enterprise IDs, which are owned by the organization and authenticated by Adobe
  • Federated IDs, which are owned by the organization and authenticated by the organization via SAML

Non-sign-in licenses include:

  • Device licenses are “activated” to a machine and consume a license from a “pool” for as long as that machine is activated. Different pools will exist for different product collections.
  • Serial Number licenses are “activated” to a machine and do not report back to Adobe when they are used

Caveat: Because my organization doesn’t have an Enterprise agreement, I cannot speak with actual experience with that licensing model. The approaches I talk about with respect to the “Device License” should mostly apply to the “Serial Number” model used by Enterprise agreements, however.

Here’s a screenshot borrowed from one of Adobe’s help pages on the subject. Note how Education categories can be found in the Teams (top) and Enterprise (bottom) agreements:

[Screenshot from Adobe’s help page: product categories under the Teams (top) and Enterprise (bottom) agreements]

Enterprise agreements have the benefit, besides apparently greatly reduced cost per license, of not needing to track individual device “activations” due to Adobe allowing “anonymous” serialized activation.

If you’ll be deploying device/serial licenses, you need some way to automate the installation of the license. Adobe offers two approaches, built around their Creative Cloud Packager application:

  1. Create a device-licensed package, which will contain one or more apps and also deploy the activation when the package is installed. This process also creates an uninstaller that will remove the apps and deactivate that license.
  2. Create a license file, which allows us to “migrate previously deployed named user or trial packages to serial number licenses or device licenses”: This outputs four files, which Adobe calls a “package.” It is not – it is four files, created in a directory, with no accompanying explanation. Presumably we can use this to activate and deactivate a license? (Keep reading to find out!)

[Screenshot: Creative Cloud Packager’s package creation options]

The first seems like a sane option; the application(s) and license are bundled together as a single package. Munki even supports importing these in a single step along with the accompanying uninstaller package, and has special logic tailored to support uninstalling these while still using Adobe’s provided uninstallers. This works well if you don’t anticipate mixing Named and Device/Serial licenses, and are drawing all licenses from the same pool, or a small, manageable number of them.

If, however, your org will also be using Named licenses, or you expect to find yourself handling device licenses in various pools and want to treat the device license or serial number separately from the actual application installers and manage them independently, option (2) (creating a license file independent of the application) seems to make more sense.

Nick McSpadden and Patrick Fergus also discovered a critical problem with (1): if one creates multiple device license packages from the same pool (for example, separate Photoshop and After Effects packages both from a “Complete” pool), or multiple serial number packages with the same serial number, removing any one of these licensed application packages will uninstall the license as well.

This is not an issue that would affect everyone – despite moving away from the “Creative Suite [X] Premium” product model, the “pools” (or serial numbers) are still logical collections of applications, so it’s possible that one might just build packages containing all the applications from a pool and not consider a need to add or remove individual applications from this pool on an ongoing basis.

It affects me, however: with many subscriptions to the Complete pool, while still not needing half of the applications on many of our workstations, I’m instead opting to build individual application installers that I’d still like to be able to manage atomically, without needing to worry that removing one product will cause another to cease functioning. An unlicensed Creative Cloud installation greets the user with a completely hostile dialog:

[Screenshot: the Creative Cloud sign-in prompt shown by an unlicensed installation]

Karl Gibson of the Adobe IT Toolkit team has acknowledged that this is a bug, and it’s scheduled to be addressed in an upcoming update. Also, Nick McSpadden has documented his solution to this “overlapping” install/uninstall issue, which is to combine the licensed installer with the “Named” (ie. unlicensed) uninstaller, so that if a product is removed using the uninstall pkg, the machine remains licensed. For serial number installations this is perhaps more feasible because serial number installations are “anonymous,” and an active installation doesn’t consume a license from an (expensive) pool of licenses.

So, solution (2) it is for me, at least as of today. This is partly to mitigate this bug, and partly to offer a more flexible workflow as the deployment of Creative Cloud pans out. In my environment we’ll most likely be seeing use of both Named and Device licenses, so it is also helpful to be not building and tracking duplicate packages for the same applications.

In posts which will soon follow, I’ll cover the steps involved to build an OS X installer package from CCP’s “Device File Package [sic]”, a couple simple approaches to managing this license package using Munki, and some odds and ends.

Adobe Creative Cloud Deployment: Packaging a License File


In the previous post, we covered the scenarios in which you might want to deploy a Creative Cloud device license or serial number separately from the actual applications, as a “License File Package.” Although the Creative Cloud Packager app supports this as a workflow, the problem is that it doesn’t help you much with regards to the files it outputs.

[Screenshot: CCP’s “Create License File” workflow]

Adobe has had the APTEE tool around for a while, as a command-line interface to the Creative Suite licensing tools, to aid with deployment automation – it’s a single executable which confusingly does not include “APTEE” anywhere in the name of the binary: adobe_prtk.

This tool is still around, and has been updated for Creative Cloud. It’s also claimed to be installed as part of Creative Cloud Packager, which is true, but its location is not documented anywhere I could find, so I’ll save you the trouble looking for it: /Applications/Utilities/Adobe Application Manager/CCP/utilities/APTEE/adobe_prtk.

According to the official documentation for the “Create License File” option in CCP, that outputs four files:

  • AdobeSerialization
  • RemoveVolumeSerial
  • helper.bin
  • prov.xml

…there’s no adobe_prtk among those. But it turns out that if we take a look at the strings of AdobeSerialization – which the docs say we can run with “admin privileges” to license the software – some of the first strings found in the binary look an awful lot like flags to adobe_prtk:

com.apple.PackageMaker
3.0.3
AdobeSerialization
AdobeSerialization.log
CreativeCloudPackager
Utilities
##################################################
Launching the AdobeSerialization in elevated mode ...
helper.bin
prov.xml
/Provisioning/EnigmaData
type
DBCS
--tool=GetPackagePools
--tool=VolumeSerialize
--stream
--provfile=

AdobeSerialization seems to be a purpose-built version of adobe_prtk with options baked in. This tool loads your authenticated data and license details stored in an opaque format from the prov.xml file to perform a transaction with Adobe’s licensing servers and commit the results to the local machine’s Adobe licensing database.

Along with AdobeSerialization there’s the RemoveVolumeSerial tool. Unfortunately, as mentioned previously and in Adobe’s official CCP documentation this tool is supported for “Enterprise and EEA customers only” – which means it can’t be used to deactivate a machine that is using a Device license in a Teams-based agreement. In fact, it has an LEID baked in along with the adobe_prtk options: V7{}CreativeCloudEnt-1.0-Mac-GM. (For reference, the current LEID for the Creative Cloud Teams “Complete” product is V6{}CreativeCloudTeam-1.0-Mac-GM.)

We’ve got enough hints in these two binaries to figure out that we can pass flags to adobe_prtk. From my examination, these roughly boil down to using the --tool=GetPackagePools flag for a device (Teams) license (see references to “DBCS” throughout the code, ~/Library/Logs/oobelib.log file and the prov.xml file), and --tool=VolumeSerialize for a serial number (Enterprise) license.

Using the adobe_prtk tool and knowing the LEID of the product we want to deactivate, we can also do what the RemoveVolumeSerial tool cannot: deactivate a Teams-based device license. The tool options don’t seem to differ between a device and a serial license; the issue is simply that RemoveVolumeSerial has a hardcoded LEID, whereas we can look ours up in Adobe’s list, or even better, retrieve it automatically from the prov.xml file.

Based on this examination, it looks like adobe_prtk can perform a superset of the functions of the two special-purpose binaries output by CCP, using a single binary. So, in order to build a “licensing package” that can be installed as a native OS X installer package (and deployed with Munki, Imagr, DeployStudio, Casper, etc.), we have our necessary ingredients: the adobe_prtk (or “APTEE”) tool, the prov.xml file corresponding to our license, and the commands to install and remove the license. Still, we need to know which command flags go with which license type, and we need to set the correct LEID if we ever want to be able to deactivate the license. Why not instead use the binaries output by CCP? As described above, the removal tool will not work for all license agreements, and I’d rather not have to keep track of multiple different binaries if one can do all the work.
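
As an illustration of how those pieces fit together (hedged – verify the exact invocation for your license type against Adobe’s APTEE documentation; the prov.xml path is a placeholder), a serialization run looks roughly like this:

sudo "/Applications/Utilities/Adobe Application Manager/CCP/utilities/APTEE/adobe_prtk" --tool=VolumeSerialize --provfile=/path/to/prov.xml --stream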

Since investigating all this, I decided it would be useful to encapsulate this into a script that removes the guesswork, so I’ve put it on GitHub: make-adobe-cc-license-pkg.

It only requires a copy of adobe_prtk, which will be discovered automatically if you’ve already installed CCP on the system running the script, and your prov.xml file output from your “Create License File” workflow. Everything else should be figured out for you, and a package will be output given the package parameters you specify:

$ ./make-adobe-cc-license-pkg --name AdobeCC_Complete --reverse-domain ca.macops prov.xml

** Found DBCS (device) license type in prov.xml
** Found LEID 'V6{}CreativeCloudTeam-1.0-Mac-GM' in prov.xml
** Extracted version 8.0.0.160 from adobe_prtk Info.plist section
** Wrote uninstall script to /Users/tsutton/AdobeCC_Complete-2015.05.27.uninstall
pkgbuild: Inferring bundle components from contents of /var/folders/8t/5trmslfj2cnd5gxkbmkbn5fj38qb2l/T/tmprvCGEI
pkgbuild: Adding top-level postinstall script
pkgbuild: Wrote package to /Users/tsutton/AdobeCC_Complete-2015.05.27.pkg
** Built package at /Users/tsutton/AdobeCC_Complete-2015.05.27.pkg
** Done.

Since I use Munki, and this package can only really be properly “removed” using an uninstall script, this tool can also import the resultant package into Munki and set the appropriate uninstall_script key, populated with an uninstall script containing the appropriate LEID. Either way, the uninstall script will be saved to the package’s output directory for your own use in other systems.

See the repo’s GitHub page for more details and documentation about how the package is built.

One of the Mac sysadmin community’s biggest peeves with Adobe’s AAM installer framework is that when failures occur (and they happen a lot), useful error codes are rarely printed in the context in which one normally monitors package installations (for example /var/log/install.log). Adobe documents their error codes on their website, and so the install/uninstall scripts generated by this package actually report this info to standard output/error so you can at least immediately get a short description of why any failures might have occurred. There will always be full debug output in the various AAM logs, but the locations of these files are rarely easily discoverable or well-named (for example, ~/Library/Logs/oobelib.log).

This tool hasn’t been widely tested (thanks to Patrick Fergus again for his help testing the functionality with an Enterprise license), and it will probably be getting some tweaks or fixes over time.

Moving on, if you were using Munki or some other software management system (and hopefully you are using one of these), how would you “scope” how these licenses get deployed to machines? We’ll look at a short example using Munki in the next post.

Adobe Creative Cloud Licensing and Deployment – Managing Licenses with Munki


Previously we covered some boring details about Adobe Creative Cloud licensing and how this impacts deploying it to managed clients. We also covered a process and script I came up with that makes it slightly less painful to package up device and serial licenses for distribution to clients. Now, how do we manage these in a software management system? Since I use Munki, I'll use that as a model for how you might manage this license from an administrative standpoint. This is Munki-specific, but the principles should apply elsewhere.

Munki

With Munki we use "manifests" as our instructions for what clients should do with the software in our repository. We can list items that will be installed or uninstalled, or items that the user can install themselves via a self-service app, "Managed Software Center." We can also use "conditional items," which let us restrict these rules to conditions that need to be satisfied based on certain criteria scavenged from the client (which themselves can be extended by the admin). Any manifest may represent a single machine or it might be shared across a group of machines. For more details on these concepts in Munki, here are links to manifests and conditional items on the Munki wiki.
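As a quick illustration of the conditional items syntax (a minimal sketch; the condition and item name are generic examples, not part of the Adobe examples that follow), a manifest fragment might look like this:

<key>conditional_items</key>
<array>
    <dict>
        <key>condition</key>
        <string>machine_type == "laptop"</string>
        <key>managed_installs</key>
        <array>
            <string>SomeOptionalTool</string>
        </array>
    </dict>
</array>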

Note that I’ll be using the word “pool” a lot here, but those with enterprise license agreement types should be able to substitute “pool” with “serial number.”

Manifest/pkginfo structure

Here are a couple of simple ways that Munki could handle managing the apps and licenses: we could add device file installers as separate items and add these to a manifest alongside the actual applications included in (or a subset of) that device file's corresponding pool. Here's an example of a Munki manifest with some CC 2014 applications; notice that in the last item I've added the licensing package to be installed independently of the applications. These application items are all packages built as Named licensed packages, which means they are effectively unlicensed. See our first post for more brain-meltingly boring background information on this.

<plist version="1.0">
<dict>
    <key>catalogs</key>
    <array>
        <string>production</string>
    </array>
    <key>managed_installs</key>
    <array>
        <string>AdobePhotoshopCC2014</string>
        <string>AdobePremiereProCC2014</string>
        <string>AdobeSpeedGradeCC2014</string>
        <string>AdobeCC_License_Complete</string>
    </array>
</dict>
</plist>

Were we to leave out the AdobeCC_License_Complete item above, these applications would successfully install but a user would need to first sign in to use the software. In this way we can think of the installs as “unlicensed by default” and then we add a device/serial license on top if it’s appropriate. Alternatively users could install the device licenses themselves as optional software, if that would make sense in your environment. In this example above, the Munki admin or help desk would be manually adding the license for a given manifest. If one later wanted to remove and deactivate the license, one would just move the item to a managed_uninstalls array in the same manifest.
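For example, removing and deactivating the license later is just a matter of moving that one item within the same manifest (a fragment, using the names from the example above):

<key>managed_installs</key>
<array>
    <string>AdobePhotoshopCC2014</string>
    <string>AdobePremiereProCC2014</string>
    <string>AdobeSpeedGradeCC2014</string>
</array>
<key>managed_uninstalls</key>
<array>
    <string>AdobeCC_License_Complete</string>
</array>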

Here’s an abbreviated pkginfo for AdobeCC_License_Complete, just to show some important bits.

<plist version="1.0">
<dict>
    [...]
    <key>catalogs</key>
    <array>
        <string>production</string>
    </array>
    [...]
    <key>name</key>
    <string>AdobeCC_License_Complete</string>
    <key>version</key>
    <string>2015.05.19</string>
</dict>
</plist>

The manifest we saw above looks at a single catalog: production, and the license pkginfo belongs to only this catalog. This is a simple example.

For a slightly more sophisticated example, this license package could instead be set as an update for any products we have that would be “licensable” using that license, and be limited to a specific catalog. Our manifest, slightly modified, looks like:

<plist version="1.0">
<dict>
    <key>catalogs</key>
    <array>
        <string>adobe-cc-license-pool-complete</string>
        <string>production</string>
    </array>
    <key>managed_installs</key>
    <array>
        <string>AdobePhotoshopCC2014</string>
        <string>AdobePremiereProCC2014</string>
        <string>AdobeSpeedGradeCC2014</string>
    </array>
</dict>
</plist>

Notice we’ve taken the license as a separate installer “item” out of the manifest’s installs list, and instead just made this manifest look at another catalog named after this pool. Our pkginfo for this license installer is part of this same special catalog, and is an update_for all items we have in our system that are part of the “Complete” pool:

<plist version="1.0">
<dict>
    [...]
    <key>catalogs</key>
    <array>
        <string>adobe-cc-license-pool-complete</string>
    </array>
    [...]
    <key>name</key>
    <string>AdobeCC_License_Complete</string>
    <key>update_for</key>
    <array>
        <string>AdobeAuditionCC2014</string>
        <string>AdobeIllustratorCC2014</string>
        <string>AdobePhotoshopCC2014</string>
        <string>AdobePremiereProCC2014</string>
        <string>AdobeSpeedGradeCC2014</string>
    </array>
    <key>version</key>
    <string>2015.05.19</string>
</dict>
</plist>

Note how we have other items in our update_for list like Audition CC, which could be referenced in other manifests. In this specific example I’m only deploying up to five different apps, but since this device license package belongs to a “Complete” pool, any Adobe app I package later could then be added as an update_for this Complete device license package.

The elegance of the second approach is that Munki can actually handle automatic activation and deactivation for us. If I later decided to move SpeedGrade to the manifest's managed_uninstalls list, Munki would check whether it could also remove the AdobeCC_License_Complete item, but determine that it cannot because other items in our managed_installs list are still installed. If we were to remove all the applications for which this license is an update, Munki would then go and remove the license, deactivating the machine and freeing up a license.

You might think, "That's fine for this example with a Complete pool, but what if we also purchase a pool that's just Photoshop?" You don't want to consume a Complete pool license if someone only has Photoshop – but this is why each license installer would be limited to a special-purpose Munki catalog. If we had a "Video apps" pool, for example, we'd just add another catalog named adobe-cc-license-pool-video and make that pool's license package an update for all the apps we have that could be in that pool.

I’ve rarely used catalogs as a mechanism to separate out licenses for individual business units, but it seems like this could be a use case where it might provide a useful layer of abstraction. On the other hand, managing the license as a separate line item also makes it easier to “convert” a machine back to a Named license model, if manifests are per-machine and the machine changes users, or as budget dictates the allocation of ongoing subscription licenses. In the second example above (using catalogs), if we later remove the special license catalog from a manifest’s catalogs array, this does not mean that Munki will then automatically remove the license pkg from the client. It’s only by actually placing a license in a managed_uninstalls array that Munki actively goes out to ensure the item is removed. So, there are pros and cons to both approaches.

License and application usage tracking

So, those are a couple of examples of how one might approach managing these licenses in your Munki repo and among your clients. Another approach that might be worth considering is to have a more intelligent license-tracking mechanism help manage this for you. Luckily, Munki even has one built in! At this time of writing, MunkiWebAdmin is the only public Munki web-based reporting application that has support for tracking licenses, but the client uses a simple mechanism, based on data it receives from a server, to determine whether or not it will offer a given item to a user.

Recently Greg Batye gave a talk at the monthly Macbrained meetup, covering how Facebook uses Munki. One interesting thing he covered was how they use a combination of Crankd and Munki conditional items to keep a list of applications that haven’t recently been used, and remove these automatically. This allows them to offer expensive software to a much wider user audience because in many cases the software will be automatically uninstalled after it’s not in use for some length of time. See a video recording of the talk starting around 34:10 minutes in, and the recap with a link to slides.
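For a flavour of how this can work with Munki's admin-provided conditions, here's a minimal sketch of a condition script. Everything in it is an assumption for illustration: the app path, the key name, the default ManagedInstalls location, and especially the crude "last accessed" heuristic (Facebook's approach uses crankd-collected usage data instead):

#!/bin/sh
# Hypothetical admin-provided condition script, installed to /usr/local/munki/conditions/.
# It records roughly how many days ago Photoshop was last opened, naively using the
# app bundle's access time. managedsoftwareupdate runs these scripts as root.
APP="/Applications/Adobe Photoshop CC 2014/Adobe Photoshop CC 2014.app"
CONDITIONS="/Library/Managed Installs/ConditionalItems"

if [ -d "$APP" ]; then
    last_used=$(stat -f %a "$APP")
    now=$(date +%s)
    days=$(( (now - last_used) / 86400 ))
else
    days=9999
fi

defaults write "$CONDITIONS" photoshop_last_used_days -int "$days"

A manifest could then use a condition predicate like photoshop_last_used_days > 90 to move an item into a managed_uninstalls list.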

Next

We’re still not done! We haven’t detailed anything about the actual installation process, we’ve only covered licensing in three dense posts. If you’re not bored yet, stay tuned for some odds and ends about importing these applications and updates using Munki and aamporter.

Adobe Creative Cloud Deployment – Pushing Installers with Munki


We previously covered a few aspects of Adobe Creative Cloud from the perspective of deploying it to OS X clients. We spent the whole time dealing with the licensing aspects but never talked about the actual installers and updates.

Adobe installers are spoiled

There is a single option available to you for getting the installers: you must use the Adobe Creative Cloud Packager application (CCP for short) to fetch and build OS X installer packages. Because Adobe has reinvented the wheel and opted to use their own custom installer framework, the installer packages that CCP outputs do not use any of OS X's native installer features – instead the packages simply provide just enough of a mechanism to bundle up Adobe's own installer tooling (which has actually grown substantially in size in proportion to the actual applications it installs) and run it via "preinstall" scripts.

The advantage of having the installers in an installer package format is that they can be deployed using the multitude of tools which can install packages: the OS X GUI, the installer command, Remote Desktop, DeployStudio, and management platforms such as Munki. The disadvantages of this wolf-in-sheep’s clothing packaging system are numerous, but this is the only option we have to deploy Adobe applications efficiently.

Adobe support in Munki

Luckily, Greg Neagle, author of Munki, has done us the service of providing great support in Munki for these spoiled application installers – because historically these packages have needed “help” installing in different contexts. After CCP builds a new package, it outputs both an installer and uninstaller package to our output directory, and we import these packages like any other using munkiimport.

I was going to go into more detail on this whole process, but Nick McSpadden has already done just that in an Adobe CC guide on the Munki Wiki.

This boils down to using Munki’s admin tools to import the base installer package, and then using aamporter to automatically fetch and import all the applicable updates. Nick also details some of the less-standard applications that need some more massaging once in the repository.

Because the process (which creates compressed disk images from the CCP bundle-style packages) is lengthy and you may have many packages to do, I wrote a very simple helper Python script that helps batch these initial imports. Nick has covered this in the wiki article as well, but for reference it’s here.

As far as updates go, one way in which Munki beats all other software management platforms is that it can apply Adobe's "patch updates" natively without any repackaging necessary. Adobe has for a long time offered the ability to generate packages from their updates, but that requires not only manual work to build them, but a lot of manual bookkeeping to keep track of them. aamporter has been able to leverage this support by grabbing the updates directly from the in-application updater mechanisms and importing them directly. This also makes it possible to keep CCP installer packages "simple" by omitting the optional updates, and minimizing the need to interact with CCP.

Creative Cloud Desktop app

If you opt to deploy Named licenses, one important difference from device- or serial-number-licensed packages is that the Creative Cloud Desktop Application (or CCDA) cannot be deselected from the package configuration when you build the package in CCP. The "Applications & Updates via the Apps Panel" option can be, which might be useful to disable if your users do not have administrative rights or you'd prefer they install apps via your own in-house systems.

CCDA cannot be disabled with Named Licenses, although the “updates via the Apps Panel” can be.

Alongside the CCDA, the installer package output by CCP will include a LaunchAgent that opens it at login for all users. Users will see it in the menu bar and by default it will show a big login window prompt when it’s first opened.

This is by design, because Named licenses include “cloud services” like storage and collaboration tools. However, if you are just deploying device licenses on top of Named installers (as I covered earlier) and don’t have license agreements that include these features (or your users have no use for them, or are using them in labs), you may want to ensure that this application doesn’t constantly pop up.

This LaunchAgent is located at /Library/LaunchAgents/com.adobe.AdobeCreativeCloud.plist, and so you may wish to remove it in a postinstall script or via some other mechanism. If you have many Named installers, just be aware that the installation of each Named installer will put this LaunchAgent back. I've opted to run a script that removes this plist on every machine boot (using a system like outset), installed as an update for my device license package (built using the method described in this earlier post). Since I intend to keep the Named installers useful for users who actually have Named licenses and could potentially use the features offered via the CCDA, I did not want to automatically remove the LaunchAgent for every installation, only those for which I've installed a device license activation.

Because this is a LaunchAgent and not a LaunchDaemon we cannot simply run launchctl unload -w to disable it system-wide – it would need to be disabled and overridden per user. So, if you intend to remove this LaunchAgent to prevent it from auto-launching, more drastic measures are required.
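A boot-time script for this can be as simple as the following sketch (I'm assuming outset's boot-every directory as the delivery mechanism, but anything that runs a script as root at startup would do):

#!/bin/sh
# Remove the Creative Cloud LaunchAgent so CCDA won't auto-launch for all users.
# Each Named installer run will put the LaunchAgent back, which is why this runs
# on every boot rather than once.
AGENT="/Library/LaunchAgents/com.adobe.AdobeCreativeCloud.plist"

if [ -f "$AGENT" ]; then
    /bin/rm -f "$AGENT"
fi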

Wrapping up

Are we done here? This is about as much information as I’ve needed to absorb in this process of getting our CC deployment prepared. I have some general thoughts about this whole process that may show up in a later post, and hopefully there aren’t any other major issues that crop up with the approaches I’ve outlined in the previous posts in this series.


Python For Mac Admins session resources


On June 18 and 19, I’ll be giving a talk at the MacDeploy and MacDevOpsYVR conferences, respectively in Calgary and Vancouver. The talk is a whirlwind introduction to “Python for Mac Admins” and is mostly based on code snippets and examples that I’ll be talking through in an interactive Python environment.

If you’d like to follow along, I’m putting the code resources up on GitHub, which you can clone or download a zip archive of from the website. I’d recommend cloning it with Git (git clone https://github.com/timsutton/python-macadmins-2015) so that you can pull any changes I add after this post has been published.

Looking forward to seeing some of you there!

Disabling First-run Dialogs in Office 2016 for Mac


Welcome dialog on first launch of Word 2016.

This post is part useful tidbit and part lesson in interacting with application preferences on OS X.

Office 2016 for Mac presents “first run” dialogs to the user to market some of its new features. Sysadmins often want to find ways to disable these for certain scenarios. I actually think these are often helpful for individual users, but may be less desirable on shared workstations or kiosk-like machines where users may use Office applications frequently from a “clean” profile that has never launched Office, and the repeated dialog becomes a nuisance.

There has been recent grumbling online about Microsoft's use of a registry-like format, stored in an SQLite3 database deep within a group container, for its user "registration" information, and I've seen some assumptions that other preferences live there. While this might be the case, it seems that Office stores the "first-run" settings as standard OS X preferences for each application. The Office apps happen to be sandboxed, so the preferences actually end up stored inside a given application's sandbox container. For example, Word:

~/Library/Containers/com.microsoft.Word/Data/Library/Preferences/com.microsoft.Word.plist

Mac sysadmins also tend to get hung up on plists and their paths when it comes to preferences stored by the OS. The storage location and format of capital-P OS X Preferences, however, are an internal implementation detail that developers aren't really concerned with. Applications need not know or care where a preference actually gets stored; they simply ask the preferences system to handle reading and writing preferences. We should follow the model of the developers: use either the defaults or CFPreferences methods provided by Apple (from Python or C/Objective-C/Swift) to set this. Do not directly manipulate plist files on disk to set preferences.

Knowing that there are some preferences stored in an app’s container plist, notice how we can still “pick these up” by asking defaults for the prefs for the current user:

➜ ~  defaults read com.microsoft.Word

{
    AppExitGraceful = 1;
    ApplePersistenceIgnoreState = 1;
    NSRecentDocumentsLimit = 0;
    OCModelLanguage = "en-CA";
    OCModelVersion = "0.827";
    SendAllTelemetryEnabled = 1;
    SessionBuildNumber = 150724;
    SessionDuration = "4.805182993412018";
    SessionId = "9FBFC4A2-B0A5-4624-93C5-3811C77E4F1E";
    SessionStartTime = "08/06/2015 20:40:34.905";
    SessionVersion = "15.12.3";
    TemplateDownload = 1;
    WordInstallLanguage = 1033;
    kFileUIDefaultTabID = 1;
    kSubUIAppCompletedFirstRunSetup1507 = 1;
}

I’ve omitted many keys that existed on my machine from Word 2011 (which all start with 14), but we can see there are several that are obviously for the latest version, given the SessionVersion value. The interesting one is kSubUIAppCompletedFirstRunSetup1507, a boolean.

We can test whether this will just work as a system-wide default by deleting our user’s version and then setting it in the any-user “domain” (or “scope”):

➜ ~ defaults delete com.microsoft.Word kSubUIAppCompletedFirstRunSetup1507
➜ ~ sudo defaults write /Library/Preferences/com.microsoft.Word kSubUIAppCompletedFirstRunSetup1507 -bool true

Launch Word again to verify you're not getting a first-run dialog even though we deleted it from our user's preferences. Close Word, and verify that kSubUIAppCompletedFirstRunSetup1507 was still not set for the current user – the preferences system doesn't set a key for the user until the application requests setting it (and possibly only if it would differ from the value set in the any-user scope, which it didn't need to here because the "first run" had already happened as far as Word is concerned).

Here are other application domains that seem to look for the same preference key (Outlook and OneNote seem to have their own additional “welcome” panes; see Outlook’s FirstRunExperienceCompletedO15, for example):

com.microsoft.Outlook
com.microsoft.PowerPoint
com.microsoft.Excel
com.microsoft.Word
com.microsoft.onenote.mac
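If you want to pre-set this for all of these applications at once in the any-user scope, a loop like the following (run as root) should do it. This is a sketch based on the key observed above; the 1507 suffix may well change in future builds, and Outlook and OneNote may still show their own additional welcome panes:

#!/bin/sh
# Set the first-run-completed key for each Office 2016 app in /Library/Preferences.
for domain in com.microsoft.Word \
              com.microsoft.Excel \
              com.microsoft.PowerPoint \
              com.microsoft.Outlook \
              com.microsoft.onenote.mac; do
    defaults write "/Library/Preferences/${domain}" kSubUIAppCompletedFirstRunSetup1507 -bool true
done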

What’s Wrong with the Office 2016 Volume License Installer?


Office 2016 for Mac comes in an installer package that has been causing several issues for Mac sysadmins deploying it in their organizations. At least a couple posts exist already for how to “fix” the installer and deploy the software, but I haven’t seen anyone actually detail some of these issues publicly. The best way to “fix” the installer is to have Microsoft fix it so that it can be deployed the same way we deploy any other software. Office is probably the most common software suite deployed in organizations, and so it’s a very bad sign that 2016 for Mac has begun its life as an installer that cannot be deployed without workarounds and/or repackaging.

In this post, as usual I’ll go into some detail about this installer’s problems, review some known workarounds and propose some solutions.

Client software deployment tools

Microsoft provides Office 2016 for Mac in two flavors: one for Office365 subscribers which users can “activate” by signing into their O365 accounts, and one for organizations entitled to a volume license through some agreement. The volume license is activated during the install process, very similar to Office 2011. Volume licensed copies of software are often installed within organizations using automated deployment tools like Munki or Casper. These tools make it possible for IT to deploy the software without numerous manual steps on each client, and control when the software is made available and in what context (i.e. do users install on their own via a self-service system, is it installed automatically at the time the machine is deployed to a user, or later on a schedule, etc.).

There are several ways in which the context in which such deployment tools install software differs from that of a user manually installing software onto his or her own personal machine (where the user also has admin privileges), but two important ones are:

  • If installing a standard OS X installer package (.pkg, .mpkg), the installation will take place by some invocation of the installer command-line tool. This happens to set an environment variable, COMMAND_LINE_INSTALL, which is not present if an installer package is double-clicked and run using the standard Installer UI. Installer scripts may make use of this to adjust their behavior accordingly.
  • The installation may take place while no user is logged in, and the machine is waiting at the login window. This may be because a machine has just had its OS installed or re-imaged, and the deployment tools are now automatically installing all the other software destined for this machine. A piece of software may also require a logout or restart, and therefore the deployment tools may opt to first log the user out so that the software can be installed.

Office 2016’s licensing packages

The volume license installer is provided as a Distribution installer package, which includes two components that specifically pertain to licensing: 1) com.microsoft.pkg.licensing, and 2) com.microsoft.pkg.licensing.volume. You can inspect these packages yourself using a GUI tool like Pacifist or the Suspicious Package QuickLook plugin, or, even more simply, by using the pkgutil tool that's built in to OS X to expand the flat package to a temporary directory:

pkgutil --expand "/Volumes/Office 2016 VL/Microsoft_Office_2016_Volume_Installer.pkg" /tmp/office2016

The com.microsoft.pkg.licensing package installs a LaunchDaemon and PrivilegedHelperTool, which provide the infrastructure necessary to allow an application to perform the license activation without needing to ask for administrative privileges. This allows the licensing to be performed by any user on the system, and an "activation status" to be stored in a location that would normally require admin or root privileges. The package also runs a postinstall script that loads the LaunchDaemon, and if the installer was run within the GUI, the bundled dockutil is invoked to add items to the user's dock.

The com.microsoft.pkg.licensing.volume package installs an application, "Microsoft Office Setup Assistant.app," to /private/tmp, and runs a postinstall script that runs the binary within this application bundle using sudo, so as to run the command as the user who is logged in. Finally, it removes the application bundle it just installed and exits 0, so that the installation will not be aborted if this process fails (even though the rm command, given the -f flag, should not exit anything other than 0). To know what user is logged in – or the user the script assumes is logged in – it reads the USER environment variable.

Installing Office at the login window

In a command-line install, $USER will be the user running the installer command, and this will likely be root. But this is a side detail. Remember the earlier point about installations not necessarily being performed while a user is logged in? Here is what we see in /var/log/install.log if we invoke the installer while no user is logged in, via SSH, using a command like installer -pkg /path/to/Office2016.pkg -tgt /:

Aug 25 10:45:41 test-vm-yos.local installd[863]: PackageKit: Executing script "./postinstall" in /private/tmp/PKInstallSandbox.lNRt00/Scripts/com.microsoft.package.Microsoft_Word.app.nSM43R
Aug 25 10:45:41 test-vm-yos.local installd[863]: PackageKit: Executing script "./postinstall" in /private/tmp/PKInstallSandbox.lNRt00/Scripts/com.microsoft.package.Microsoft_AutoUpdate.app.eiUGrA
Aug 25 10:45:41 test-vm-yos.local installd[863]: PackageKit: Executing script "./postinstall" in /private/tmp/PKInstallSandbox.lNRt00/Scripts/com.microsoft.pkg.licensing.qL6FmB
Aug 25 10:45:41 test-vm-yos.local installd[863]: PackageKit: Executing script "./postinstall" in /private/tmp/PKInstallSandbox.lNRt00/Scripts/com.microsoft.pkg.licensing.volume.uqxBIt
Aug 25 10:45:42 test-vm-yos.local installd[863]: ./postinstall: _RegisterApplication(), FAILED TO establish the default connection to the WindowServer, _CGSDefaultConnection() is NULL.

The last line, “FAILED TO establish the default connection to the WindowServer, _CGSDefaultConnection() is NULL.”, tells us that the application being run by the com.microsoft.pkg.licensing.volume package, Microsoft Office Setup Assistant, assumes it will be able to access a GUI login session, despite not actually having any window elements that are normally shown to the user. If you run the Office 2016 installer normally in a GUI installer, you may notice this item bounce up in the dock ever so briefly in order to validate and write out a license file to disk. But in an automated installation at the loginwindow, this process starts and then stalls forever. You can find the stalled process yourself by running ps auxwww; the line looks like:

/usr/bin/sudo -u root /private/tmp/Microsoft Office Setup Assistant.app/Contents/MacOS/Microsoft Office Setup Assistant

If I retry the above using Apple Remote Desktop as the package installation tool rather than SSH and installer, the process stalls but I see no error about the failure to connect to the WindowServer, presumably because the installation through ARD causes USER to be considered nobody:

/usr/bin/sudo -u nobody /private/tmp/Microsoft Office Setup Assistant.app/Contents/MacOS/Microsoft Office Setup Assistant

So if the Microsoft Office Setup Assistant tool is hanging during the installation, one might be tempted to use choiceChangesXML with installer so that the problematic package component can be skipped and the installation can complete successfully.
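The general shape of that workaround looks something like the following. Treat it as a sketch: the choice identifier below is an assumption based on the package IDs we saw when expanding the installer, and you should confirm the real identifiers with installer -showChoicesXML against your copy of the package:

# List the available choices and their identifiers
installer -showChoicesXML -pkg "/Volumes/Office 2016 VL/Microsoft_Office_2016_Volume_Installer.pkg" -target / > /tmp/office_choices.xml

# A minimal choice changes file that deselects a single choice
cat > /tmp/office_choice_changes.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<array>
    <dict>
        <key>choiceIdentifier</key>
        <string>com.microsoft.pkg.licensing.volume</string>
        <key>choiceAttribute</key>
        <string>selected</string>
        <key>attributeSetting</key>
        <integer>0</integer>
    </dict>
</array>
</plist>
EOF

# Install with the problematic component skipped
installer -applyChoiceChangesXML /tmp/office_choice_changes.xml \
    -pkg "/Volumes/Office 2016 VL/Microsoft_Office_2016_Volume_Installer.pkg" -target /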

Office 2011

Office 2011 had a similar issue with its license subsystem, where if Office was installed and updated at the loginwindow (say, on any fresh machine install) without a manual launch in between to at least "initialize" the activation, applications would behave as if Office was not activated at all. The solution, which many organizations employed, was to capture the license storage plist from /Library/Preferences on a known good machine, and redeploy this same license file along with an installation of Office 2011, either a new install or one with the broken licensing issue. Many people have already found that this same approach – capturing the license file in /Library/Preferences/com.microsoft.office.licensingV2.plist and deploying it as a separate package – is enough for Office 2016 to consider itself volume-licensed.
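If you do go the captured-license-plist route (with the reservations below), packaging the file is straightforward with pkgbuild once you've copied it off an activated machine. A hedged sketch, with placeholder identifier and version:

# On a known-good, activated machine: stage the license file into a payload root
mkdir -p /tmp/office_license_root/Library/Preferences
cp /Library/Preferences/com.microsoft.office.licensingV2.plist \
    /tmp/office_license_root/Library/Preferences/

# Build a payload-only package from that root
pkgbuild --root /tmp/office_license_root \
    --identifier com.example.office2016vllicense \
    --version 1.0 \
    /tmp/Office2016_VL_License-1.0.pkg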

I’ve had to implement this solution for Office 2011 before, but it seems ridiculous that an application so widespread, with so much engineering behind it, can’t be installed at the loginwindow without having its licensing data severed – especially given that the licensing procedure doesn’t actually require any user input. The licensing tool runs, and the machine is now licensed. However, with a functioning, standard install, the licensing data is unique per machine. I dislike the idea of messing with (and repackaging, and redeploying) data that normally I should not need to know or care about, for which the application should be responsible, and which if something goes wrong the consequences are serious: Office ceases to function, or instead prompts users to sign in with O365 credentials to activate Office.

More issues

Oh, but this is not even the only issue with the Office 2016 installer. There is also an auto-update daemon that Microsoft adds to the LaunchServices database using an undocumented -trusted flag to the lsregister binary. This command is also run using sudo -u $USER, but only if not doing a command-line install. It looks like the install script is trying to do the right thing by not performing some user-specific tasks during a CLI install (this lsregister command changes the LaunchServices database for the user running the command); however, if this auto-update daemon is not manually registered in LaunchServices, the user will see a confusing dialog the first time they open any Office application and it checks for updates (as it does by default):

The confusing security prompt triggered by the Microsoft AU Daemon on its first update check.

This is another roadblock that a sysadmin will definitely want to avoid, and yet what is the solution? The postinstall script for the com.microsoft.package.Microsoft_AutoUpdate.app component package contains this:

#!/bin/sh

if ! [[ $COMMAND_LINE_INSTALL && $COMMAND_LINE_INSTALL != 0 ]]
then
    register_trusted_cmd="/usr/bin/sudo -u $USER /System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister -R -f -trusted"
    application="/Library/Application Support/Microsoft/MAU2.0/Microsoft AutoUpdate.app/Contents/MacOS/Microsoft AU Daemon.app"
    if /bin/test -d "$application"
    then
        $register_trusted_cmd "$application"
    fi
fi
exit 0

One possible workaround is oddly appealing: don't install the volume license combined install package at all! Office 2016 updates are actually full application installers, one for each application update. These don't contain any of the licensing or auto-update related infrastructure that is included in the VL installer. Office 2016 applications will either use the Microsoft Auto-Update (MAU) tool that might already be on the system with Office 2011, or, if MAU doesn't seem to exist on the system, the applications will simply not offer any interface with which to check for updates. Most sysadmins that deploy software might like this option anyway, while others might prefer that there still be a means to perform updates ad hoc, or expect users to update all the applications on their own.

Some experimenting with the lsregister command hints at other options for trusting other “domains,” references to which I can find only on Charles Edge’s blog:

sudo /System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister -domain system -domain user -domain local -trusted /Library/Application\ Support/Microsoft/MAU2.0/Microsoft\ AutoUpdate.app/Contents/MacOS/Microsoft\ AU\ Daemon.app

Perhaps it is possible to “globally” register this daemon in these various domains so that a user doesn’t need to individually register the daemon. Or perhaps a login script might be required to invoke this command at login time for every user, using a tool like outset. Or perhaps admins will simply opt to not install MAU at all, so as to avoid this whole mess.
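If you take the login script route, the script itself would be little more than a re-run of Microsoft's own command as the logging-in user. A sketch, assuming it's delivered via outset's login-every mechanism (which runs scripts as the user logging in):

#!/bin/sh
# Register the Microsoft AU Daemon as trusted for the current user, mirroring
# what the vendor postinstall does in a GUI install. Relies on the same
# undocumented -trusted option discussed above.
LSREGISTER="/System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister"
AU_DAEMON="/Library/Application Support/Microsoft/MAU2.0/Microsoft AutoUpdate.app/Contents/MacOS/Microsoft AU Daemon.app"

if [ -d "$AU_DAEMON" ]; then
    "$LSREGISTER" -R -f -trusted "$AU_DAEMON"
fi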

Patrick Fergus even posted this exact issue to Microsoft’s community forums four months ago, with no response.

Complain More

I've quickly skimmed over two issues with the Office 2016 for Mac volume license installer, and have alluded to various workarounds that all involve some kind of advanced trickery: using obtuse Installer choiceChangesXML overrides to avoid problematic packages, copying licensing plists from one machine to another, and modifying scripts that invoke under-documented OS X command-line tools with undocumented options. Others online who have posted about these issues have incorporated these workarounds into repackaging and custom scripts.

If you wrote the installer packages and scripts for a product like Microsoft Office, how would you feel if you found out that your product was non-deployable in its factory state, and that potentially thousands of sysadmins were breaking apart your packages and putting them back together in ways you never expected, making guesses about how your updates will be structured in the future, and bypassing your licensing creation mechanism altogether? Would you want to support an installation like this?

It’s important to understand the mechanisms being used when installer scripts are involved in software you deploy and support in your organization. It’s unfortunate that some of the most widely-used software also happens to be challenging to deploy, and the amount of effort required to convince vendors that there are issues – even just to get in contact with an actual release engineer – can be maddening at times. But if admins continue to quietly work around major issues like this, we cannot expect the situation to change.

So, escalate the issue through whatever supported channels are available to you. If you are a Microsoft Enterprise customer and have a Technical Account Manager, this seems to be the recommended route. If you’re paying for support, make it worth it. Tweet at and e-mail people who might care and may be in a position to effect change.

Provide hard data about why it impedes your ability to install the software in a supported manner, and the scope of the impact, including the number of machines. Demonstrate that you cannot use supported tools like Apple Remote Desktop, or a paid management tool like Casper, to deploy their software without needing to perform destructive changes to their packages, or deploy “updates” as the base installation and copy a “golden master” licensing plist to all machines just to have the software function. Give specific examples about where the issues lie: suggest that their Setup Assistant tool be fixed so that it may be run purely at the command-line with no GUI login session required; suggest they devise a more robust way of handling the AU Daemon trust issue, so that a command-line install can result in an installation that’s functionally the same as a manual GUI-driven installation.

MacSysAdmin Tools Smörgåsbord


Theatre entrance to the Folkets Hus. CC Image courtesy of Metro Centric on Flickr.

The MacSysAdmin 2015 conference is taking place this week in Göteborg, Sweden. In a session titled MacSysAdmin Tools Smörgåsbord, I’ll be going through a selection of tools that I’ve either written or contributed to and which are available on my GitHub repo.

Here you can find the various links to tools, posts, etc. that appear in the session slides.

Update: The slides and videos from all conference sessions have been posted here. Thanks again to Tycho and Patrik for their work organizing the conference and releasing these materials for all to enjoy.

Configuration Profiles

Coping with Adobe Creative Apps Deployment

Packaging

FPM

Homebrew stuff

OS Install Automation

osx-vm-templates

winclone-image-builder

Brigadier

Lots more!

python-macadmin-tools

The Office for Mac 2016 Volume License Installer, Two Months Later


It is now over two months since Microsoft made the Office for Mac 2016 Volume License installer available to customers in the VLSC (Volume Licensing Service Center) portal. I have previously documented a couple of major issues with the installer that impact those who deploy Office 2016 using automated means (meaning anything that doesn't involve a user manually running the GUI installer).

In this post I’ll summarize two of the major issues and talk a bit about a conference session that was presented just this past week at MacSysAdmin 2015 by Duncan McCracken.

Running at the loginwindow: fixed, sort of

The Office for Mac team has made some progress with one of the major issues with this installer, which was its inability to run the license activation process while at the loginwindow. The latest release in the VL portal at this time of writing is 15.13.4, and it fixes the issue where the license activation (run by Microsoft Setup Assistant) assumed it could connect to a GUI session, which at the loginwindow it cannot.

Unfortunately, they have not yet met what I’d consider the minimum requirement for a deployable installer: that it should be possible to deploy it with Apple Remote Desktop (ARD). While ARD has a (deserved) reputation of being unreliable and is not suitable for ongoing management of Macs at a larger-than-small scale, it’s still an easy-to-set-up tool that you can point a software vendor to as a way to test how well their installers stand up to a typical mass deployment scenario.

The reason the Office VL installer fails at the loginwindow with ARD was already explained in the afore-linked post: ARD seems to set a USER environment variable of nobody, and when their licensing tool runs it is invoked using sudo -u $USER, which fails when the command is run as nobody. I don't see any reason why sudo -u $USER should be used at all in this case.

Confusing security prompt for the auto-update daemon: still there

The other major issue with the installer is that when it detects COMMAND_LINE_INSTALL, it skips registering the Microsoft AU Daemon application with lsregister (using an undocumented -trusted option), because this should be done as the user launching the app. The end result is that installing this package without additional steps will produce a confusing "you are running this for the first time" prompt for users, triggered by the auto-update daemon, which is launched automatically the first time any Office 2016 application is opened.

Working around this issue requires some fancy footwork: setting preferences for com.microsoft.autoupdate2 to prevent it from launching automatically, or using an installer choice changes XML to prevent Microsoft Auto Update (MAU) from installing at all. The latter won't help much if Office 2011 has already been installed, because Office 2011 includes the same Auto Update application, and the 2016 applications will attempt to register themselves with it on first launch. Another option, which requires no modification to the installation configuration, is to instead create a custom script that runs the same lsregister command, and have every user run it at login time, deployed using a tool such as outset.
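For the first of those options, the preference most often mentioned in the community is MAU's HowToCheck key. I haven't seen this documented officially, so treat it as an assumption to verify in your own testing:

# Per-machine default: tell Microsoft AutoUpdate to check for updates manually only,
# so the AU Daemon isn't triggered automatically on first launch of an Office app.
sudo defaults write /Library/Preferences/com.microsoft.autoupdate2 HowToCheck -string Manual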

Admins have also gone the route of simply deploying the standalone “update” packages instead of the base application, as these don’t include the MAU components at all. This is also all documented thoroughly in my earlier post.

These advanced workarounds – repackaging, recombining, reconfiguring and “augmenting” with additional LaunchAgents – are all excellent examples of things that should never be required by an IT administrator for mainstream software. These techniques are typically only needed for niche applications made by software vendors whose release engineers have little interest in understanding the conventions and tools available for the OS platform. Adobe is obviously the one glaring exception here.

The audit by Duncan McCracken at MacSysAdmin 2015

Last week the MacSysAdmin 2015 conference took place in Göteborg, Sweden. Duncan McCracken, whose company Mondada offers a paid Mac packaging service, spent the latter half of his presentation deconstructing the Office 2016 installer.

A video recording of Duncan's presentation, as well as some of his resources used in the demo, can be found at the MacSysAdmin 2015 documentation page (or here for a direct link to the video).

Because Mondada specializes in packaging as a service, Duncan is an expert at doing packages properly, and is experienced with fixing the mistakes made by commercial vendors who don’t properly implement the tools made available by the Installer framework and packaging tools on OS X. Somewhat of a perfectionist, Duncan is used to completely disassembling and re-assembling a flawed package (or one that uses a custom packaging engine – see his 2010 MacSysAdmin Installer Packages session for an example) to make it as “correct” as possible, and using the appropriate mechanisms available in the Installer framework to perform whatever custom logic may be necessary.

The Office 2016 package deconstruction begins roughly halfway into the video. As someone who's all too familiar with problematic installer packages (and Office 2016's in particular), I found the session extremely entertaining. The parts of Duncan's demos that didn't go so well were supposedly caused by a misconfigured (or broken?) shell binary in the OS X VM he was using for the demonstration; the process he went through to re-assemble the installer package should otherwise have resulted in a successful installation.

Given that Mac IT admins are still in this awkward phase where OS X El Capitan is now shipping on all new Mac hardware, Outlook 2011 effectively cannot run on El Capitan, and organizations are feeling pressure to deploy Office 2016 as soon as possible, it’s unfortunate that the Office 2016 installer still requires so much “fixing.” I’m willing to go out on a limb and say that Office is the single most commonly deployed commercial software in organizations.

That Duncan dedicated nearly half of his session to this installer package is a testament to how far IT admins need to go simply to deploy software in a manner that provides a trouble-free experience for users. Software vendors do not have a clue that we do this – so don’t think that they are “out to get you” – but when software becomes this hard to deliver to users, it’s time to push back and give real-world examples of the contexts in which we install software and details of the workarounds we implement. You may well better understand the implications of sudo -u $USER in postinstall scripts than the release engineers do, so educate them!

There’s even contact info in a comment from my previous post. If you don’t have an expensive enough agreement with Microsoft (we don’t), it can otherwise be challenging to get a fruitful contact with the engineering team, so this is an opportunity to provide direct feedback.
