Kiosk Mode for Windows + SSO for RDP (part 2)

Last time we looked at Kiosk Mode for WES 7. This can also be used for Windows 7 Professional installs, and I plan to test with XP in the future. That could help provide low-cost kiosks for almost any scenario (libraries, church public use PCs, etc).

Today, however, I wanted to go through my struggles with SSO for RDS. My initial thought was that I didn’t want any prompt on the user’s screen; I wanted the ‘typical’ Windows Server 2008 R2 login screen shown below.

Typical RDP Logon Screen

However, with NLA or any RDP security enabled on the RDS Session Host or the RDS Connection Broker, you have to authenticate before you ever reach this logon prompt. In my original testing, I didn’t want the user to log in on the thin client; I wanted them to log in at the terminal server’s logon prompt. DNS round-robin load balancing made this impossible: presenting that logon prompt forced users to log in twice whenever their existing session lived on TS2 but DNS pointed their thin client to TS1 when the RDP connection was first opened. So when Joe moved to a different thin client that was already sitting at the TS1 screen above, even though Joe had logged into TS2 earlier, the Connection Broker did its job by redirecting Joe to TS2, but TS2 made Joe log in again. Long story short, Joe doesn’t like that :)

Unfortunately (for my purely personal preference), in order to prevent the double login prompt for our users, I had to change the connection settings to force users to log in before connecting to the terminal server (shown below). This isn’t really a big deal, other than requiring me to reverse some minor tweaks I had made that prevented the terminal server from accepting local credentials and prevented the RDP connection from caching those credentials.


The next step was to configure my terminal servers and Connection Broker to accept my local credentials (otherwise users would be prompted twice). The Connection Broker in this step is vital because it’s handling the brokering of your connection to resume your existing session if you have one (hence Connection Broker). Since you’re authenticating before you get to the server, you get redirected to the appropriate server and (if configured correctly) don’t have to type in your credentials again. See my settings below (ignore the missing certificate – that will be fixed before final deployment). For my testing purposes, I have also configured my saved RDP file to “Connect and don’t warn” to servers where it doesn’t recognize the certificate. This will be changed later after I deploy my certificates so I know when something is wrong. Here are my settings from the RD Session Host Configuration screen.

RDP-TCP Properties

Logon Settings
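For reference, here is roughly what I set in those two dialogs. Treat this as a sketch rather than a screenshot transcription, and verify each option against your own RD Session Host Configuration screen:

```
RDP-TCP Properties -> General
  Security layer:    RDP Security Layer   (the lowest common denominator for older clients)
  Encryption level:  Client Compatible

RDP-TCP Properties -> Log on Settings
  (•) Use client-provided logon information
  [ ] Always prompt for password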

For your situation, you may require more stringent security, so adjust those as appropriate. For my mixed environment, I’ve gone with the lowest common denominator until I can remove some of my older equipment and software. Another important piece of this, assuming you want your users restricted to a single session, is the setting below:


If you’re using a farm for your RDS servers, then you must have this configured on all relevant servers. It’s much easier to use a GPO to configure these settings and then apply that GPO to the OU in which the RDS servers reside. That makes life much easier as you grow your RDS Server environment.
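If you script your server builds, the single-session restriction maps to a well-known policy and registry value on each Session Host. A sketch (verify the paths in your environment before deploying):

```
Group Policy:
  Computer Configuration -> Administrative Templates -> Windows Components
    -> Remote Desktop Services -> Remote Desktop Session Host -> Connections
    -> "Restrict Remote Desktop Services users to a single Remote Desktop
        Services session" = Enabled

Equivalent registry value:
  reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fSingleSessionPerUser /t REG_DWORD /d 1 /f
```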

It’s also important to properly configure the RDP connection you launch at start-up. I’ll be posting the text from my RDP file, which you can copy into your environment (after personalizing it, of course). In case you didn’t already know, you can open any RDP connection in Notepad to edit the file.
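In the meantime, the SSO-relevant lines of a saved RDP file look something like this. This is only a sketch: farm.example.local is a placeholder for your farm’s DNS name, and authentication level:i:0 is the “connect and don’t warn” certificate behavior, which you’ll want to tighten once proper certificates are deployed:

```
full address:s:farm.example.local
enablecredsspsupport:i:1
authentication level:i:0
prompt for credentials:i:0
```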

Kiosk Mode for Windows + SSO for RDP (part 1)

This will be a multi-part series on implementing thin clients (or thick clients) in a kiosk mode connecting to a Remote Desktop Services (RDS) farm with Single Sign On (SSO).  I hope to consolidate 12+ hours of research, testing and configuration changes over the past few weeks, most of that being this week as we began finalizing our implementation plan for 45 new thin clients.  I hope this is helpful to anyone thinking about a similar project, or to anyone who can apply what’s covered here to something completely different.

I’ve spent quite a bit of time trying to find the solution to this problem for our thin client implementation, and only after several weeks of trial and error was I able to piece together multiple bits of knowledge to accomplish my goal.

Deploying 45 new thin clients to two of our facilities.

Half of these would need to be in a “kiosk” mode, while the other half would need to have access to a local browser (IE), but nothing else.  All of these thin clients would have Windows Embedded Standard 7 (WES) and we’d be connecting to a Microsoft Remote Desktop Server (formerly Terminal Server).

I have been comparing Wyse and HP since they both have solid products, and I was looking for my long term solution for my environment. I wanted something that could allow remote management and updates without having to physically touch each one. Both manufacturers offer a Device Manager application which allows this functionality. Both have minor differences but use the same basic technologies and provide basically the same functionality (at least for what I needed).

In my discussions with the sales reps, I wanted to come up with a way to provide this kiosk type of functionality with an RDP session.  Ultimately, the session must resume if disconnected.  It also must allow users to have a Single Sign On experience to our Remote Desktop Services farm.

HP provides a “Connection Manager” application which allows you to create RDP, Citrix, and VMWare sessions that will automatically launch if someone closes that session. This is a really nice feature, but it does have some limitations, which I’ll get to later. Dell does not offer that type of application, but their engineers had published some ways to hijack the Windows shell to create this same scenario for a Citrix environment. Armed with this documentation, I started looking at ways to convert this to an RDP connection.

- Persistent RDP connection to Windows 2008 R2 RDS Session Host (connection must reopen automatically if user closes the connection)
- Single Sign-On for RDS (Terminal Server) users. Our users will float between stations and need to be able to resume their previous session. This is a must for us to be successful in implementation.
- Single Sign On must be true SSO – without a proper configuration (as I learned the hard way), multiple RDS servers with a Connection Broker will let you resume your session, but you may get multiple login prompts.

- Windows Server 2008 R2 Connection Broker
- 2x Windows Server 2008 R2 Remote Desktop Services Session Hosts (formerly called simply ‘Terminal Servers’) configured as a farm.
- 60 total Windows Embedded Standard clients with the future expansion to introduce existing Windows 7 Professional clients into the RDS environment.
- Single ‘kiosk’ user (already configured on thin clients) with autologon enabled.

During my research, I came across several different posts on various sites that helped get me to my final solution.  I’ll share those below; those folks helped me through this arduous task, and putting the pieces together is all I can claim as my own.

First, a post on Spiceworks’ forum provided the “persistent” part of the RDP connection.  Specifically, “Dustin M” posted this nugget:

i have been experimenting with this as well. in my case, it was statically map a PC to a virtual pc, so i created an RDP file on the thinclient to point to the virtual instance.

then i set the pc to logon automatically as a local user, and changed the shell value in:

HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon

to c:\connect-rdp.bat

the batch file contains:

@echo off
:start
start /wait c:\windows\system32\mstsc.exe c:\connect.rdp
goto start

so as long as the PC turns on, it automatically connects to the RDP session.

The only issue with this particular solution is that there’s a command prompt window that someone may feel like they want to close, even though it says “DO NOT CLOSE THIS WINDOW.”  But it was definitely worth a shot.  I added that to my string for the shell, and it worked! Sort of.  On WES7 it only worked on the Admin user, and I’m fairly certain it was because when I was loading the registry hive for the ‘kiosk’ user, something was locked down (from the manufacturer’s image) on their account that prevented them from accessing the batch file I had created.

As I continued looking through the registry to figure out what I was missing, I noticed the following string in another key:

hidewin c:\windows\filename.bat

The .bat file was simply a batch command to create temp folders on the RAM drive for the thin client, but that “hidewin” made me curious.

(insert Google)

Turns out HideWin is awesome.  Briefly, it hides windows of running applications from the UI.

I adjusted my string for the shell, and it worked! The batch file window was hidden, and the RDP connection worked as promised – but again, only for the Admin user.  More research led me to something I had somehow overlooked in Group Policy for years: “Custom User Interface.”

Our friends at Microsoft offer a “Custom User Interface” policy that can be deployed via Group Policy – including local Group Policy.  Simply open GPEDIT.MSC, target your ‘kiosk’ user, and navigate to User Configuration -> Administrative Templates -> System -> Custom User Interface.  This, I found, handles the hidewin c:\connect-rdp.bat command perfectly!  So now I have a batch file that opens an RDP connection (and re-opens that connection should a user close it), and something hiding that command prompt window.  The beauty of the GPO solution is that my Administrator user keeps a normal Windows shell and doesn’t need anything special done when I need to log in for administration purposes in the future.
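Put together, the policy value amounts to this (a sketch; connect-rdp.bat is my batch file name, and hidewin.exe must be somewhere the ‘kiosk’ user can run it, so adjust both for your image):

```
User Configuration -> Administrative Templates -> System -> Custom User Interface
  Enabled
  Interface file name: hidewin c:\connect-rdp.bat
```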

That’s it for configuring a kiosk mode to use an RDP connection.  Check out part two to read about SSO for the Remote Desktop Services environment.  Later in this series we’ll look at getting the RDP file configured properly for the kiosk we just built.

What is your biggest technology struggle as a non-profit/small business?

The Importance of [good] Project Management and Communication

I know it’s been a long time since I last posted…and for that I apologize.  Now that life has settled down a bit, I’m able to dedicate a little time to this…that being said, let’s dive right in!

I am constantly reminded of the need for a client to have a good project manager, especially when working between multiple vendors.   Most software vendors have project managers that will work directly with the client on a particular project or task (depending on the size and scope of the task), but when multiple software vendors are involved, many times the customer gets lost in the shuffle if they aren’t careful to have a strong project manager.  From the customer’s perspective, setting proper (and realistic) expectations from the beginning is key.


Scope: Vendor A and Vendor B build a bi-directional interface to share information across the two applications.
Expectations: Vendor A configures interface for both applications, Vendor B will verify complete implementation and schedule training with the client.
Initial Estimated Timeline: 2-3 weeks after initial install

Actual Process:

Day 1-3:  Vendor A completes the interface installation, and informs the client that everything is complete.   Vendor B completes first set of tests and finds 2 bugs which need to be addressed by Vendor A.  Customer is given the steps necessary to verify bugs are fixed.

Day 5-7: Vendor A resolves first set of bugs, Customer tests and verifies.

Day 8: Customer discovers that Flat file from Software A is needed to import into Software B.  Requests the appropriate format from Vendor B and sends to Vendor A.

Day 14-19: Vendor A creates flat file and imports into Software B. Customer finds possible bug, requests information from Vendor A.  Vendor A states all information is correct, asks for Vendor B to verify completion.

Day 20: Vendor B verifies the import was correct, but the bug the customer found does exist.  Vendor B and the customer confirm the exact problem and provide documentation to Vendor A.

Day 24-31: Vendor A resolves final bugs and releases the client for training.


What you don’t see in this process is the dozens of emails from the client to both vendors and multiple phone calls that took place to resolve the various issues that should have been taken care of on Day 1.  Imagine this process without a client advocate [read: project manager] to manage this process and maintain the information flow between both vendors and the client.   Without diligent communication and follow-through by the project manager, what would have happened is that on Day 1 when Vendor A completed their steps, the client would have called to schedule their training with Vendor B.

This is because the client assumes that Vendor A and Vendor B have already done this process hundreds of times, and that everyone already knows what they’re doing.  Unfortunately, there are so many variables in software implementations that what may seem simple (even to technical staff) is often not so simple.

As you can see from this scenario, it took 31 days rather than the 14-21 days originally estimated.  According to both vendors, 14-21 days was a generous time-frame, as the process was “simple” to finish.  In this case, it wasn’t so simple.  To me this stresses the importance of a project manager on the client side of the project.  It helps for that person to be knowledgeable enough to get involved in the nuts and bolts of the implementation and keep both vendors honest.

This is an important part of the vendor-customer relationship going past the project, because a smooth, timely project helps create a positive relationship for both the client and the vendor(s).   Remember all those vendors who couldn’t manage a project to save their lives?  I sure do.  In fact, I’ve even been on the vendor side of the unmanageable projects that turn out to be a nightmare because of any number of reasons.   No vendor ever tries to create a project that doesn’t go smoothly, so anything the customer can do to help the process is extremely important.

Since I try to focus on nonprofit organizations here, I know that a staff member dedicated to project management can be quite expensive, and that’s not necessarily feasible for every organization.  If I had any advice for those looking at upcoming (or even current) projects, it would be to find the person who is your “Super User” (for application projects, etc.), or who is one of the folks the project is designed to benefit (a stakeholder), and involve them in the project management, even if only in a limited scope.  It’s important that the person involved has a stake in the success of the project, because that gives them motivation to see it come to a timely, successful completion.  And while timeliness is important, a complete and successful project that is 3 days over the due date is better than a project that’s on time but has many outstanding issues, issues that will ultimately carry past the due date to resolve or cause significant technical, operational or logistical problems in the future.

If you are needing technical assistance with project management for vendors, please feel free to get in touch with me.  I have a consulting company that specializes in Information Technology needs for small to medium-sized organizations, and project management is something I do as part of my consulting.

A discussion on desktop security

Desktop security can be very frustrating for IT professionals as we try to find the delicate balance between security and user experience.  Most end users take things for granted and don’t realize the potential danger that lies in wait in advertising and other lovely pop-ups, the ones that warn of the impending doom of their computer or hard drive, which can only be saved by this “free” software!

For those who wonder how this applies to nonprofits, just remember that productivity is just as vital (if not more so) in the nonprofit world as it is in the corporate world.  Losing time to a virus outbreak is annoying, but lost or stolen data, or downtime because your ISP has cut off your internet access over SPAM emails being sent out, is much more than just annoying.  Rectifying the virus problem itself can take a lot of time, and that doesn’t include the network cleanup, the ISP phone calls, and the calls to customers or donors letting them know their data was either lost or stolen.

Antivirus software such as Symantec, ESET NOD32, or my personal favorite, Vipre Enterprise, can only do so much with a persistent user who really wants to install that free software.  There’s only one sure way to prevent that software from being installed – and that’s to prevent the user from installing it at all.

This is probably the one thing most IT staff (volunteer or paid) get soft on with their end users: local admin rights.  Being a local administrator, for those who don’t know, gives you the “keys to the castle” to make all system/registry/file changes, which also allows any software you install to make those changes.  Viruses, spyware and other malicious software (well, all software, actually) run within the security context of the logged-in user.  This means, to continue the key analogy, that any virus that attempts to run on your computer has access to every door and room that you do.  Microsoft notes this on many, if not all, of their security patches with a statement similar to this:

“An attacker who successfully exploited this vulnerability could gain the same user rights as the local user.  Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights.”

For management, power users and even IT personnel, it’s fairly often that software needs to be installed, updated or system changes made for a variety of reasons, so local administrator access is really the default in most people’s network configuration.  Step 1 – Join PC to Domain, Step 2 – Add user to local Administrators group.  The alternative is to log off and then log back in with an administrative user to make any system changes. This is not only annoying, but it’s also inefficient.  However, virus outbreaks, as I noted earlier,  are much more costly.

In my network, I apply the following principles:

1) No local administrators unless a specific application requires it (there are some older applications that do).

2) Use Restricted Groups in Group Policy to assign workstation administrator accounts (that are not domain administrators) to all PCs within the domain.

3) Use Vipre Enterprise for antivirus and malware protection.

Restricted Groups are a very powerful tool in Group Policy for assigning users to specific groups on a local machine, but they must be used carefully.  Restricted Groups is a wipe-and-replace setting, which means the user(s) you put in the local Administrators group will replace the existing members.  So be sure to add the “Administrator” account in addition to any domain accounts you would like in the Administrators group.  More information on Restricted Groups is available from Microsoft.
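As a sketch, the Restricted Groups entry might look like this, where EXAMPLE\Workstation Admins is a placeholder for your own (non-domain-admin) workstation administrators group:

```
Computer Configuration -> Windows Settings -> Security Settings -> Restricted Groups
  Group: Administrators
    Members of this group:
      Administrator
      EXAMPLE\Workstation Admins
```

Because the member list is wipe-and-replace, anyone not listed here is removed from the local Administrators group on the next policy refresh.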

If you don’t have any special applications that require administrative permissions, feel free to quit reading now, as I know this is long-winded.  But if you want to implement these security measures and have some older applications that require local administrator access, keep reading for tips on that.


A New Beginning

After a lapse in my attempts at writing/blogging on all things technical, and some opportunities to share my knowledge and experience with others, I felt it was time to write about things that not only are IT-related but that are helpful to nonprofit organizations who struggle with a lack of IT leadership.

Traditionally the nonprofit sector is lacking in leadership in technology, and it ultimately hurts the organization with increased costs or less-than-spectacular results.  I plan to share not only high level items, but also some tips for use in day-to-day IT infrastructure.

Not all of my posts will be nonprofit-specific, but I hope that those in the NFP sector can benefit from my experiences and challenges in a nonprofit world.

