SUSE Insider Newsletter

The SUSE Insider is a quarterly posting with the latest tips and tricks, product advancements and industry insights only available to SUSE customer subscribers. If there is a specific topic you would like to have covered, please email

Dan Elder

By: Dan Elder

Where and When to Use Docker

Dan Elder is a Senior Engineer who runs the Linux services team for Novacoast and holds a BS in Computer Science from University of California, Santa Barbara (UCSB). His responsibilities include architecting and implementing solutions to complex problems that other vendors run from. Novacoast is an IT services and solutions company built on broad offerings, deep expertise and a collaborative culture of adaptable problem solving. Novacoast is a SUSE partner.

Introduction to Docker

We've all heard of Docker. Some of you can fully articulate what DevOps is and how containers fit into the larger conversation of intelligent-workload management. For most of us, though, the jump from physical systems to virtual systems was a lot easier to understand than the jump to containers. Even when we can grasp what a container is and what the benefits are, understanding when and where to use them can be challenging. When you're inventing an entirely new paradigm for workload management, there's bound to be confusion. But let's step back for a minute and cover the basics.

Containers aren't new. They've been in other operating systems for more than a decade, and Linux Containers (LXC) have been part of Linux for quite some time. What Docker does is take a technology that was complicated to understand and implement and make it simple. Suddenly, everyone could build and deploy containers and take full advantage of the technology. SUSE was one of the first enterprise Linux vendors to fully support Docker because it recognized early on the promise of Docker and added the features and support necessary for it to grow beyond venture capital startups and dot-com giants. But what does Docker on top of SUSE give you?

Simply put, Docker gives you a number of capabilities, all available in a very flexible and yet fully standardized solution:

  • A consistent packaging environment for your applications
  • An environment fully under version control where the code is the documentation
  • A technology that eliminates the overhead of a full operating system instance (physical or virtual) between your application and the underlying hardware
  • The ability to massively consolidate resources and allow for rapid and consistent deployments
  • A bridge between developers, operations and security

Just imagine a stateless environment with no interactive access to production containers, which themselves contain only the minimal set of libraries needed to run your application and nothing else. Having AppArmor (or SELinux) confine your application adds an additional strong layer of protection. The vast majority of security threats are instantly eliminated in such an environment. The entire DevOps automation philosophy helps ensure that consistency is maintained, and communication barriers are broken down. This is the promise of Docker.
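As an illustrative sketch of that locked-down posture (the image name is a placeholder, and the flags assume a reasonably recent Docker engine with AppArmor enabled on the host), a stateless container can run with a read-only root filesystem and a confining profile enforced:

```sh
# Run a stateless application container:
#  --read-only       the root filesystem cannot be modified at runtime
#  --tmpfs /tmp      writable scratch space lives in memory only
#  --security-opt    enforce the default AppArmor profile on the container
docker run --rm --read-only --tmpfs /tmp \
    --security-opt apparmor=docker-default \
    myapp:latest
```

With no persistent writable filesystem and a mandatory access control profile in place, an attacker who compromises the application has very little to work with.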

Docker Workloads: Pros and Cons

But as great as Docker and other container technologies are, they're not a fit for every workload. If we look at the organizations using Docker today, we can see where the low-hanging fruit is. The move towards micro-services is perfect for the world of containers. The smaller and more discrete the workload, the easier it will be to package up without having to worry about too many moving pieces and dependencies.

Many web applications also fit very well into the world of containers because there is a clean separation between the application and data. Since the ideal Docker workload is stateless, web applications are a natural fit. That's not to say that Docker workloads can't store data; it's just that we want to isolate it so that the data exists independently of the Docker container. A database can run perfectly under Docker; we just need to separate the actual database data from the binaries, libraries and configuration data that make up the workload.
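As a minimal sketch of that separation (the image name, mount path and password are placeholders), the database files can live on a named volume so the container itself stays disposable:

```sh
# Binaries, libraries and configuration live in the image;
# the actual database files live on a volume that outlives the container.
docker volume create dbdata
docker run -d --name db \
    -e MYSQL_ROOT_PASSWORD=changeme \
    -v dbdata:/var/lib/mysql \
    mariadb
# The container can be deleted and recreated at will;
# the "dbdata" volume keeps the state.
```

The container stays the immutable, versioned artifact; the data stays on storage you manage and back up separately.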

Any type of application deployment that's done repetitively is also another good fit for Docker. If you've already gone through the effort of scripting out the installation and configuration of an application, turning that into a Dockerfile is a natural progression and allows for greater control and flexibility. Automation is the key to scalability.
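An existing install script maps almost line for line onto a Dockerfile. A minimal sketch, with a hypothetical base image, package and file names:

```dockerfile
# Illustrative only: base image, package and file names are placeholders
FROM suse/sles12

# The same steps your install script performs, now under version control
RUN zypper --non-interactive install myapp-deps
COPY myapp.conf /etc/myapp/myapp.conf
COPY myapp /usr/local/bin/myapp

EXPOSE 8080
CMD ["/usr/local/bin/myapp"]
```

Each instruction becomes a cached, repeatable build step, and the file itself doubles as documentation of exactly how the application is deployed.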

Unfortunately, there is also a significant number of workloads that don't make sense to put in containers. The most obvious are large, monolithic applications with tightly integrated processes storing state or reading configuration in non-obvious places. A lot of legacy applications live in this world, particularly the more complex ones. Think of an all-in-one appliance image that a vendor supplies to you with a database, a web tier and a processing engine all tied together. While you could potentially break out each individual component into individual Docker containers, your vendor won't support that, and how would you ever upgrade it anyway? Alternatively, you could turn the monolithic environment into a Docker container (operating system installation and all), but you lose every benefit of containers and gain nothing.

Other workloads to avoid include GUI-driven application installations, since it's quite challenging to automate them through scripting or other means, which is necessary for building Docker containers. Any application you're only installing once is also not the ideal candidate for anything DevOps because the amount of resources needed to automate the deployment of the application is often much more significant than building it out by hand. Save these types of applications until after you've picked the lower-hanging fruit.

Once you start mastering Docker, you can take advantage of the large and growing ecosystem of partners solving problems beyond what Docker does. These are things like providing an easy-to-manage, web-driven private Docker registry with full authorization capabilities built in, allowing fine-grained access control to all of your critical Docker images (SUSE Portus). Another example is building out a massively scalable software-defined storage back end for maintaining the state of your Docker containers (SUSE Enterprise Storage using Ceph). Other examples include any of the dozens of other gaps between building out your first Docker container and a fully automated Docker orchestration and scheduling environment that handles workload management globally. It's an exciting new world with Docker. Embrace the tools available to you as well as the enterprise platform and tools that will power your data center of tomorrow.

Joachim Werner

By: Joachim "Joe" Werner

SUSE Manager 3.0: Adding Salt


Joachim "Joe" Werner is the Senior Product Manager for Systems Management at SUSE, working out of the SUSE headquarters in Nuremberg, Germany. He is an Open Source early adopter with more than 15 years of experience in developing and managing open source software for the enterprise.

The IT Future: The Software-defined Data Center and DevOps

At SUSE we believe that the software-defined data center (SDDC) is the future. We also think that DevOps will change forever how software is developed and deployed. How is SUSE preparing SUSE Manager, our operating system lifecycle management solution, for that future? And how do we achieve that with minimal disruption for our existing user base and the functionality they've been used to from SUSE Manager? Keep reading.

First things first: What do we mean by a "software-defined data center"? Obviously, software can only run on real hardware. Behind any virtual machine is a real CPU, and behind any software-defined network switch are real ports and cables. But a trend that started many years ago with virtualization of computing resources has now expanded into the network stack and, recently, storage: The underlying hardware has become more and more generic. At the same time, functionality and logic that used to be hard-coded into special-purpose hardware like servers, routers, switches, and network-attached storage (NAS) or dedicated storage area networks (which used their own dedicated protocols and wiring like SCSI and Fibre Channel) is increasingly being emulated in a software layer.

This allows for dynamic re-allocations of resources based on the needs of the application. Parameters like CPU and RAM assignments, network topologies and bandwidth, and mass storage can be defined in software and reconfigured on demand. And commodity hardware can, to a large extent, replace expensive purpose-built appliances.

This is where the second trend, DevOps, fits in nicely: In a DevOps world, there is no artificial distinction between development environments and production systems any more. Software is developed and tested on essentially the same environments it is later deployed on. With the same highly automated tools that developers use to set up their working environments, QA can build test environments that run a multitude of automated tests every time new code has been submitted. Once the tests succeed, the same automation allows for deploying code into production.

In the world of software-defined IT, the only differences between development, testing, and production environments should be their scale and service-level agreements. While an engineer might run a local virtualized environment on a laptop or use infrastructure provided by a private or public cloud, production environments may scale up and out, and provide the redundancy and high availability needed for 24x7 service delivery.

We've shown earlier how open source projects and products from SUSE, such as SUSE Studio™ and the Open Build Service, can help you implement such "continuous integration" environments, where cycles from check-in of new code to deploying it into production can be reduced from months to days or hours without sacrificing quality for agility. In fact, at SUSE we have our own continuous line from code check-ins to package builds, image builds and automated testing.

SUSE Manager 3—with Salt: Preparing for the Future

With SUSE Manager 3, we are laying the foundation for the next wave of automation: event-driven architectures, where software can be deployed and run on a software-defined infrastructure based on rules that describe how the infrastructure should react to certain events.

The key enabler for that is our decision to make Salt, a highly scalable and extensible modular remote execution and configuration management framework, part of SUSE Manager 3.

Originally, our goal was much more limited. We wanted to add more powerful configuration management to SUSE Manager by adding capabilities that customers found in tools like Puppet and Chef. And we wanted to modernize the underlying infrastructure of the proven software and patch management in SUSE Manager to make it more future-proof, scalable and responsive.

After evaluating many options, we chose Salt because it gave us both a strong automation framework, with a message-bus architecture that allows us to execute tasks like gathering inventory data or installing software packages on many systems in parallel, and a powerful configuration management engine that is easy to extend and uses an approachable syntax to describe the desired state of systems.

Later, we learned that we don't have to stop there. Because any system that SUSE Manager 3 controls can become a so-called "Minion" that the Salt server or "Master" can contact via the always-on message bus, and those Minions can send back events to the Master, possibilities for rule-based automation are endless.

This is where SUSE Manager 3 meets DevOps and the software-defined data center. It starts with the tools a software engineer uses to set up working environments. For example, our own developers are making heavy use of a combination of Vagrant, Docker containers, and Salt in a tool called Suminator. Suminator allows us to set up various SUSE Manager environments, from proven SUSE Manager 2.1 to the latest engineering snapshot of SUSE Manager 3, from scratch—again and again. And if Salt can be used to deploy SUSE Manager, you can surely use it to deploy your own services. If you've seen one of our SUSE Manager demos lately, chances are that Suminator was used behind the scenes.

In a DevOps-style continuous integration (CI) environment, Salt's orchestration engine can then be used to listen to events that the build system or CI platform (let's say the Open Build Service or a Jenkins server) emits. So every time a new build is completed, Salt can take over.

If used on an individual system, Salt makes sure that any dependencies that were defined in the so-called state files are met. For example, a software package might require a certain user to be created before it starts. Also, it might have a dependency on a database that needs to be installed, configured, and started first.
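A minimal sketch of such a state file (all names are illustrative): the application package requires its user to exist first, and its service requires a running database:

```yaml
# /srv/salt/myapp.sls -- illustrative Salt state file with dependencies
myapp_user:
  user.present:
    - name: myapp

mariadb:
  pkg.installed: []
  service.running:
    - require:
      - pkg: mariadb

myapp:
  pkg.installed:
    - require:
      - user: myapp_user
  service.running:
    - require:
      - pkg: myapp
      - service: mariadb
```

Salt resolves the `require` relationships and applies the states in the correct order, no matter how the file is laid out.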

But that's not all. Salt can also orchestrate the execution of states on several systems, again allowing users to define dependencies between them. And because Salt has drivers for many public and private cloud frameworks as well as Docker, this is not limited to a specific platform.

Finally, Salt allows you to write so-called Beacons, probes that fire an event if a certain condition is met on a Minion. We demonstrated that at SUSECon 2015 when we used a Beacon that watched file changes and reported them in real time to the SUSE Manager web console, but also allowed a so-called Reactor to trigger an action.
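A hedged sketch of a Beacon configuration on a Minion, assuming the inotify beacon module is available on that system: the Minion fires an event onto the bus whenever the watched file is modified:

```yaml
# /etc/salt/minion.d/beacons.conf -- watch a file and emit events on change
beacons:
  inotify:
    /etc/passwd:
      mask:
        - modify
```

Every modification to `/etc/passwd` now shows up as an event on the Master, where it can be logged, displayed, or acted upon.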

This has a lot of very promising use cases, from implementing real-time intrusion detection to self-healing systems. That kind of event can also be used to trigger a configuration change on a related system. Let's say a new node in a cluster is fired up. Once the clustered application is running on the node, the node can send an event that triggers a configuration change on the load balancer to include the new node.
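That load balancer scenario could be wired up with a Reactor. A sketch under illustrative names (the event tag, targets and state name are all placeholders):

```yaml
# /etc/salt/master.d/reactor.conf -- map an event tag to a reactor state
reactor:
  - 'myapp/cluster/node_up':
    - /srv/reactor/update_lb.sls

# /srv/reactor/update_lb.sls -- re-apply the load balancer state
# so it picks up the newly started node
update_loadbalancer:
  local.state.apply:
    - tgt: 'lb*'
    - arg:
      - haproxy
```

When the new node emits its `node_up` event, the Master reacts by re-applying the load balancer state on the `lb*` systems, and the cluster reconfigures itself without human intervention.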

The foundation for that kind of event-driven architecture has been laid in SUSE Manager 3. Now we are looking forward to your feedback on how these capabilities can best be leveraged.

As mentioned before, we've achieved the integration of Salt without forcing users of SUSE Manager 3 to radically change the way they are working with it today. Systems that are Salt Minions and systems that are using the traditional SUSE Manager client stack can peacefully co-exist in SUSE Manager 3, and virtually all existing features will work on both stacks.

It's your choice when to start leveraging the new possibilities that Salt opens up to you.

Pete Chadwick

By: Pete Chadwick

SUSE OpenStack Cloud Roadmap: Building the Foundation for the Next-generation Data Center


As Senior Product Manager, Cloud Infrastructure, Pete Chadwick’s responsibilities include comprehensive market and business analysis required to deliver go-to-market strategies for one of our priority business areas—cloud. He has presented at many industry events including LinuxCon, CloudOpen, Open Source Business Conference, and Cloud Computing Expo. He is a published author, including the 2012 Forbes article, "Why Cloud Computing Needs to—and Will—Go Open Source."

What’s Driving the SUSE OpenStack Cloud Roadmap?

SUSE has just launched SUSE OpenStack Cloud 6. This has been the most extensively tested release yet, with customers and partners providing great input throughout the process. We recently held some webinars with the title, "Give Me Liberty and Give Me SUSE OpenStack Cloud 6"—a play on words based on the upstream OpenStack release code named "Liberty," which is the basis of SUSE OpenStack Cloud 6. Of course, the original quote from Patrick Henry is "Give me liberty or give me death"—uttered as Henry was exhorting his fellow citizens to support the cause of American independence from Great Britain. We changed the wording to avoid any negative implication with SUSE OpenStack Cloud.

That discussion started me thinking in broader terms about industry trends and the pressures and challenges facing enterprise IT organizations worldwide, especially the pressure to maintain investment levels while simultaneously addressing the challenge to be more responsive to the business and manage the sheer scale of IT in terms of workloads, endpoints and users. Based on the increasing number of conversations we are having with customers on SUSE OpenStack Cloud and some third-party research that SUSE has recently done, it is clear that enterprises are increasingly looking at private cloud as the preferred approach to modernizing the data center. At the same time, enterprises are embracing public cloud providers such as Amazon, Microsoft, Google, and others as a platform to rapidly and cost-effectively provision capacity for some workloads. This leads to what can be called a "data center without boundaries."

These are the main drivers behind this:

  • Software-defined everything is becoming the norm. First, we had virtual machines (software-defined compute), followed by software-defined storage and networking. The goal of all these efforts has been to abstract away the complexity of the underlying physical infrastructure through a standard set of APIs. OpenStack—with its rich set of APIs and flexibility to handle different hypervisors, storage use cases and network topologies—is becoming the de facto management layer.

  • IT managers are embracing self-service. The lines of business want to try new approaches to managing the business, and this is driving new IT initiatives: Platform as a Service, big data and containerization, to name a few. By providing a stable and flexible foundation on which to build and, more importantly, to manage these different initiatives, OpenStack gives IT the tools it needs to satisfy the demands of its customers.

SUSE OpenStack Cloud: What Lies Ahead

This is the basis for our roadmap for SUSE OpenStack Cloud in 2016 and beyond. SUSE OpenStack Cloud was the first OpenStack distribution designed to address enterprise requirements. Our first release, in 2012, introduced a framework to simplify the installation of OpenStack and accelerate time-to-value for deployments. Since then we have introduced mixed-hypervisor support, which enables enterprises to deploy open source and proprietary hypervisors and have them work together in a single cloud; and we have introduced high availability support for the OpenStack control node, helping to ensure that OpenStack can be a stable and reliable platform for business applications.

With SUSE OpenStack Cloud 6, we are introducing enhancements in all of these areas:

  • The installation framework has been redesigned with a simpler web front end to further ease deployments. At the same time, the back end of the framework has been re-architected so that upgrades to future releases of SUSE OpenStack Cloud can be done without requiring downtime for workloads running in the cloud.

  • We have extended our mixed-hypervisor support to add IBM System z support. Initially, systems running z/VM can be added to SUSE OpenStack Cloud as compute hosts alongside x86 servers running hypervisors such as Xen, KVM, VMware ESXi and Microsoft Hyper-V. Later, we will provide the ability to move the OpenStack control plane to System z.

  • To enhance stability, compute nodes running SUSE Linux Enterprise Server can now be configured as part of the high availability cluster. This addresses the need for enterprises to deploy a platform that has the stability needed for business-critical workloads.

  • We are introducing support for deploying and orchestrating workloads using Docker containers. This is building on our support for Docker in SUSE Linux Enterprise Server 12 and is important because containerization promises to simplify the movement of workloads between private and public clouds.

  • We are adding more enhancements to the ability of SUSE OpenStack Cloud to manage the underlying software-defined infrastructure of the data center with tools such as distributed virtual routing, which improves the performance and resiliency of software-defined networks; file sharing as a service, which enables end users to dynamically configure file systems that can be shared among a group of virtual machines; and support for the latest release of SUSE Enterprise Storage.

In future releases, we expect to continue these trends with additional hypervisor support, improved container management and orchestration, non-disruptive upgrades, and the ability to provision physical servers as well as virtual environments on which to deploy workloads. One new area is the integration of the Cloud Foundry Platform as a Service with SUSE OpenStack Cloud, an effort we are working on with our partner SAP.

To return to the thought that started this discussion, in context of the evolution of the data center and the need to manage and exploit software-defined infrastructure and the public cloud, "Give me liberty or give me death" should instead be, "Give me OpenStack or give me death."

Rich Wiltbank

By: Rich Wiltbank

SUSE Spotlight: An Interview with Rich Wiltbank, Senior Director, Enablement and Training


Rich Wiltbank is Senior Director of Enablement and Training for SUSE. In this role, he directly supports and contributes to the company’s aggressive growth by driving the company’s strategy around technical training and certification, as well as enablement of our partner and internal sales teams.

Q. What has changed in the SUSE training offerings, and what is the goal of these changes?

We’re making changes in three areas: revamping our Training Partner Program, providing more opportunities for technical certification, and adding and refreshing courses.

The common goal in these changes is to give our customers and partners a sound understanding of open source technologies so they can use our products and solutions successfully in their jobs and businesses. We also want to make it easier for them to get the training they want, which includes how, when and where they want it.

Q. Let’s start with the SUSE Training Partner Program, including the Instructor program. What’s different about it?

We want to ensure that our Training Partner Program makes high-quality training easy for our customers to consume and, at the same time, delivers value to our partners. In the past we’ve either had too many partners in some areas—so providing SUSE training wasn’t profitable for them—or we’ve had too few partners, making it harder for customers to get the training they want.

The new partner program is focused on three things: delivering high quality, engaging training content about our open source technologies, making that training content easily accessible to our training partners, and giving our customers and partners simple ways to find and consume SUSE training courses offered by our training partners. In short, the program enables our partners and instructors to be much more in tune with what SUSE is doing with training and technology.

We have train-the-trainer courses, designed to ensure that the instructors who teach each course are competent and confident in the specific courses they teach to customers. A certified instructor isn’t necessarily certified in all of the SUSE courses, but is certified on the specific courses he or she teaches. That provides a better experience for the attendees and also helps to make sure the partner is successful.

Our partners make the decision about how to offer the content SUSE provides. They can provide online training or face-to-face, instructor-led training. They can offer the training within their own facility or go to the customer’s office and offer that training there.

One helpful new change is that we’re integrating all available partner and SUSE classes into a single calendar here. Customers can choose the course that’s best for them based on geography, a specific partner, or the type of course—whether they want an online course or a face-to-face course. From the calendar, they can connect directly to that course and vendor.

Q. What’s new with technical certification at SUSE?

SUSE has added online certification testing as well as cross-certification with the Linux Foundation, so more individuals can be certified. Certification is really important. It shows an employer, colleagues and partners that someone has been tested on a set of skills and can perform tasks associated with those skills. Certification validates an individual’s expertise.

In the past, to get certified you had to travel to a training center to take that test. That’s fine if there’s one near your office or home, but sometimes it’s difficult to find a training center close by.

Now, with online technical certifications, you have the opportunity to take the exam and get certified much more easily. When you feel prepared, you go online, set up a time to take the exam, and take it anywhere, as long as you have an internet connection and a computer—at home, in the office, or anywhere in the world. The CLA (Certified Linux Administrator) and the CLP (Certified Linux Professional) exams are available online for both SUSE Linux Enterprise Server 11 and 12. When you take and pass the exam, you get a certificate just as you would have if you took the exam in a testing center.

As before, we have courses that specifically prepare individuals to take the CLA and CLP certification exams. We also have courses available on OpenStack and Ceph, based on SUSE OpenStack Cloud and SUSE Enterprise Storage, though the corresponding certifications and exams are not yet available. We are developing both now, so you should soon see some announcements about additional certifications.

Q. What is the Linux Foundation cross-certification?

The Linux Foundation has a Sysadmin Certification and an Engineer Certification, which are level 1 and level 2 certifications, respectively, just like our CLA and CLP exams. We’re working with the Linux Foundation to help people who already have its certifications also get certified on the SUSE Linux Enterprise distribution. We’ll accept certification on the Linux Foundation’s level 1 exam as a prerequisite to take our level 2 exam. Once the person passes our level 2 exam (CLP), we will then grant that person the CLA certification as well. The opposite situation also holds true. Someone with a CLA certification (our level 1) can take the Linux Foundation’s level 2 exam, and once the person has passed it, he or she will get the Linux Foundation’s level 1 certification as well.

Q. What’s happening with product- and solution-related training?

We are developing a larger pool of courses to choose from. Right now, we’ve been focusing new courses around new open source technology: OpenStack, Ceph and new open source capabilities for the enterprise environment. We want our customers to be able to take and use this training immediately—both with our products and with other open source technology in the industry.

Q. How do you decide what courses to develop?

We look at what our customers need and how we can help them to be successful, and then we focus coursework around that. In addition, customers can request courses, which we seriously consider. We also add courses based on new technology in the market and our own new products—for example, our Ceph-based SUSE Enterprise Storage. We know there will be demand for this training. For that reason we develop courses in conjunction with new product launches. We also refresh existing courses as products evolve.

If someone is interested in training that we don’t offer, we are very happy to get input from them. We have a special email address set up: If you write to that address, the email will come directly to my team. We evaluate those requests and create courses based on our priorities. That’s another way we get input on courses.

Q. Do you ever go into a customer’s facility and tailor the training to their needs?

Yes. Our internal training teams have provided specific training on our technology, customized for a specific customer. We also encourage customers who want customized training to engage with our partners—especially for customers who need part of their training on SUSE solutions but who also need training on other apps or solutions that we don’t provide. For more information about the training options, visit here.

Kay Tate

By: Kay Tate

Certification Update


  • Kay Tate is the ISV Programs Manager at SUSE, driving the support of SUSE platforms by ISVs and across key verticals and categories. She has worked with and designed programs for UNIX and Linux ISVs for fifteen years at IBM and, since 2009, at SUSE. Her responsibilities include managing the SUSE Partner Software Catalog, Sales-requested application recruitment, shaping partner initiatives and streamlining SUSE and PartnerNet processes for ISVs.
  • Marjorie Westerman is a Marketing Writer at SUSE. She edits The SUSE Insider and SUSE News.

YES Certified Hardware

From November 2, 2015 to February 1, 2016, SUSE published YES Certification Bulletins for 296 hardware devices—most of them for network servers but also for 28 workstations. Almost a third of these servers were from Lenovo. Other hardware vendors included Business IT AG, Cisco, Dell Computing, Fujitsu, HP/HPE, Hitachi, Huawei Technologies, IBM, Inspur, Intel, NEC, Positivo Informatica, SGI, Unisys and VMware.

To research certified systems, go to the Certified Hardware Partners' Product Catalog, search the system name, and click on the bulletin number to see the exact configuration that has been certified.


Here is the breakdown of YES Certifications completed in the period:

  • All of the 28 workstations submitted were certified on SUSE Linux Enterprise Desktop 12. Among the companies submitting workstations was a new company—Business IT AG.

  • More than half of the network servers (90) were certified on SUSE Linux Enterprise Server 12 SP1.

  • Of the remaining network servers, half (44) were certified on SUSE Linux Enterprise Server 12 and half were certified on SUSE Linux Enterprise Server 11 SP4.

SUSE Partner Software Certifications

"Ready for SUSE Linux Enterprise” Partner Software

The software listings in our SUSE Partner Software Catalog (PSC) are constantly being updated by partners as they release new SUSE support statements and add new product features. The catalog now contains more than 7,000 software product listings. You can see summary highlights for a product and then drill down to the level of detail you need for a specific product version with a specific SUSE version and hardware architecture. Our program, which ISVs work with to populate the catalog, is called SUSE Ready. If you have an ISV solution that you want to see advertised on SUSE Linux Enterprise, please refer your ISV to our site. We are happy to work with them.


Some recent updates to the SUSE PSC are:

  • Oracle Database 12c R1: Now certified on SUSE Linux Enterprise 12 SP1. Oracle Database 12c delivers industry-leading performance, scalability, security and reliability on a choice of clustered or single servers running Windows, Linux and UNIX. It provides comprehensive features to easily manage the most demanding transaction processing, business intelligence and content management applications.

  • Fujitsu Enabling Software Technology: Open Service Catalog Manager 2015-11 Open Service Catalog Manager is free, open source software based on Fujitsu Software Systemwalker Service Catalog Manager. It provides a self-service platform for enterprises and service providers to deliver their software, infrastructure, or platform services to service consumers. It also has linkage plugins that seamlessly connect to each type of cloud, as well as other features, including functions for calculating usage fees based on actual usage and for generating reports. By using Open Service Catalog Manager, users are able to raise the operational efficiency of their cloud environments and enhance the convenience of cloud services.

  • Balabit: syslog-ng Premium Edition. The syslog-ng Premium Edition log server tool allows system administrators and security experts to build a trusted, centralized logging infrastructure for reviewing and auditing the log messages of more than 50 platforms. The syslog-ng solution incorporates the functions of clients, relays and servers into a trusted, multi-platform logging infrastructure. It collects and classifies the log messages of operating systems and applications, and transfers them over an encrypted and reliable channel to the high-performance log server, where the messages can be processed further and stored in secure, encrypted files or databases. Supporting reliable transport protocols, message buffering and client-side failover, syslog-ng minimizes the risk of message loss, thus helping meet compliance requirements such as PCI DSS.

  • Cendio AB: ThinLinc 4.5.0 ThinLinc provides several industry-leading technologies and innovations that improve performance, increase security and drive down the cost of delivering Linux and Windows-based applications to the same desktop. It is the only solution that integrates such a comprehensive set of features, enabling it to become a strategic part of your application delivery infrastructure. ThinLinc is a fast and versatile remote desktop solution. It is based on open source software such as TigerVNC, SSH, and PulseAudio. The ThinLinc server software can be used to publish Linux/UNIX desktops and applications to thin clients. The system also supports Windows Remote Desktop Services. ThinLinc supports redirection of sound, serial ports, disk drives, local printers, and smart card readers. Clients are available for a wide variety of platforms. When used with the VirtualGL software, ThinLinc can deliver high-performance graphics with OpenGL applications in a thin client environment.

Sign up to take user tests and earn Amazon gift cards.