Plugable Powerful USB and Bluetooth Devices Thu, 02 Jul 2015 00:04:11 +0000

Plugable’s New Combination USB Power Bank and Wall Charger Mon, 22 Jun 2015 16:08:46 +0000

We already have several great USB chargers and USB power banks on the market today, but we wanted to offer a new and unique product that combined the best of both worlds. We’re excited to announce our new PB-WA5K 2-port USB 5,000mAh power bank and AC wall outlet pass-through smart charger.

We wanted it to be as simple to use as possible. That meant removing confusing buttons and vague LED light indicators for a more straightforward and elegant design. The trouble with most USB power banks is that while you’re on the go and charging your devices, eventually the power bank battery also needs to be recharged. Our solution was to take functionality similar to our USB-C2W 2-port wall charger and our PB-6K2 6,000mAh power bank and combine them together into one compact and convenient package with some great new features.

While on the go chances are you won’t have access to another USB charger to charge your power bank, but you may have access to an AC wall outlet from time to time. The PB-WA5K has built-in flip-out AC prongs to recharge the power bank battery anywhere you have access to a US/Canada/Japan style AC wall outlet. In addition, while the power bank’s battery is recharging, thanks to a new pass-through charger design, attached USB devices will also charge directly from the available AC power. So when you reach the hotel and plug in the power bank and your devices to it, everything will be charged by morning.

The dual USB port design allows you to charge two devices at once while connected to the wall. Both ports are controlled by a smart IC and can charge two completely different devices simultaneously, each at its maximum rate. To turn on the charger, we added a motion sensor: simply shake it quickly from side to side and the status LED will illuminate. When plugged into an AC wall outlet, the charger remains on until it is unplugged. When used as a portable battery pack, it automatically turns off after 1 minute if no USB device is attached and charging. The status LED will let you know what is happening with easy-to-follow color codes for battery life.
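For a rough feel of what a 5,000mAh battery buys you, here’s a quick back-of-the-envelope estimate in Python. The 80% conversion efficiency and the 2,300 mAh phone battery are illustrative assumptions, not PB-WA5K specifications:

```python
# Rough estimate of how many full phone charges a power bank provides.
# Both the efficiency figure and the phone battery size below are
# assumptions for illustration, not measured PB-WA5K values.
BANK_MAH = 5000      # power bank capacity
EFFICIENCY = 0.80    # assumed losses from voltage conversion and cabling
PHONE_MAH = 2300     # assumed typical 2015 smartphone battery

usable_mah = BANK_MAH * EFFICIENCY
charges = usable_mah / PHONE_MAH
print(f"Approximate full charges: {charges:.1f}")
```

In practice the number varies with the phone, cable, and charge rate, but somewhere between one and two full charges is a reasonable expectation for a pack this size.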

Have any questions? Just comment below or email us. We’re happy to help!

The Scales of Windows Never Seem to Balance Thu, 04 Jun 2015 17:44:34 +0000 As a technical support engineer for Plugable Technologies, I see a lot of common threads in the questions asked by our customers.

A typical example would be:

“I bought your dual monitor docking station to use with my Lenovo Yoga 2 Pro. It seems to work OK, but the size of the information on my two external displays doesn’t look right. Is this a problem with the dock?”

The short answer to this question is no, it is not a problem with the dock. The long answer is that the problem lies with how Windows scales information on multiple displays. What does that mean exactly? Let’s break down the example above…

We have a Yoga 2 Pro in our test lab like the one the customer is using, and it has an internal 13.3 inch diagonal display that supports a maximum resolution of 3200 x 1800 pixels. Simply put, that means if the entire display were a sheet of graph paper, there would be 3200 columns across the page horizontally and 1800 rows down the page vertically. Each single block on the page would represent one picture element, or pixel. These pixels are illuminated with different colors to form the images you see on the screen. This is a very simplified version of what is happening, but for our example it works just fine.

A typical monitor that we could add to this system using our docking station would have a resolution of 1920 x 1080 pixels in a 24 inch diagonal display. In our test case (and to replicate what the customer has), we use the dock to add two of these monitors to the Yoga, which is running Windows 8.1. After adding the additional monitors, I notice that the icons on the newly added monitors look much bigger than I expect. What is going on?

The answer lies in how Windows is trying to scale the information on each monitor connected to the system. Windows will try to automatically scale the content on each of these displays using an equation to make everything appear in the best way possible. Sometimes this system works very well and in other instances not so much, but why? The answer has to do with pixel density.
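Pixel density is easy to quantify: it’s the diagonal pixel count divided by the diagonal size in inches (PPI, pixels per inch). Here’s a short Python sketch using the two displays from our example:

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal resolution divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

yoga = ppi(3200, 1800, 13.3)    # Yoga 2 Pro internal display
monitor = ppi(1920, 1080, 24)   # typical 24-inch external monitor

print(f"Yoga 2 Pro: {yoga:.0f} PPI")          # ~276 PPI
print(f"24-inch monitor: {monitor:.0f} PPI")  # ~92 PPI

# Without scaling, a 100-pixel-wide icon occupies:
print(f"{100 / yoga:.2f} inches on the Yoga")        # ~0.36 in
print(f"{100 / monitor:.2f} inches on the monitor")  # ~1.09 in
```

The Yoga packs roughly three times as many pixels into each inch, so the same unscaled icon would look tiny on the laptop screen, or oversized on the external monitors once Windows scales it up to suit the laptop.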

Remember that graph paper analogy I used earlier? Let’s take a look at a real world example:


The image above shows the same background image on both displays, but at different resolutions. This is a simplified version of our graph paper analogy, but it shows how the difference in pixel density between the displays can cause the same objects to appear at different sizes.

So how does that explain our mysteriously large icons from earlier? Windows by default will try to scale the content on each monitor to make everything look as good as possible. When it is faced with a large disparity in pixel density, resolution, and physical size, it can wind up scaling things too much, causing things to look out of proportion.

So how do we deal with this? Windows allows us to manually control the scaling settings via the ‘Display’ application within the Control Panel.

Scaling Slider

There is a sliding control that allows us to change the scaling of items on all of the screens connected to the Yoga from smaller to larger by dragging the slider from left to right. However, this method still applies Windows’ scaling equation to each monitor differently. If we want to set the scaling to the same value for each display, we can select the checkbox for ‘Let me choose one scaling level for all of my displays’.

Scaling Radio Buttons

Now we have options to pick the same level of scaling for each display by clicking a radio button, with choices ranging from 100% (the default) up to 200%. However, Windows 8 does not allow you to manually set the scaling for each individual display connected to the system. You have to choose between using the slider to let Windows scale each display independently, or manually picking the same scaling factor for all of them. The next release of Windows, Windows 10, will be the first to allow you to manually pick the scaling on each display.
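To see why one shared scaling level is a compromise, it helps to think in terms of logical workspace: scaling content up by a factor effectively divides the room a display has for it. Here’s a simplified Python model of that trade-off (the math Windows actually uses is more involved; this is just an illustration):

```python
def logical_workspace(width_px, height_px, scale_pct):
    """Approximate desktop space left after scaling content up by scale_pct.
    A simplified model for illustration, not Windows' actual algorithm."""
    factor = scale_pct / 100
    return round(width_px / factor), round(height_px / factor)

# One shared scaling level applied to both displays from our example:
for scale in (100, 150, 200):
    print(f"{scale}%:",
          "Yoga", logical_workspace(3200, 1800, scale),
          "monitor", logical_workspace(1920, 1080, scale))
```

At 200%, the Yoga’s panel still offers a comfortable 1600 x 900 of workspace, but each 1920 x 1080 monitor is squeezed down to the equivalent of 960 x 540, which is why a single shared level rarely suits both kinds of display.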

So what does all of this mean? When you have a Windows system with multiple monitors of widely different pixel densities, resolutions, and physical sizes, the automatic mechanisms that scale the image on each monitor may not be ideal. You can manually change these settings to something that suits your personal preference using the options in the ‘Display’ application. This behavior is not limited to our docking station; it can occur on any Windows system.

I know that is a lot to take in, and that is even with us presenting a simplified version of what goes on behind the scenes. To sum everything up neatly:

1. Monitors can have not only different resolutions but also different PPI/DPI measurements depending on physical size
2. Windows attempts to take these factors into account when scaling content on multiple displays
3. The effectiveness of this automatic scaling can vary depending on the disparity between all the displays connected to a system
4. While there has been some level of end user customization of scaling, the first version of Windows that will allow for manual scaling adjustment of individual displays in a multi-monitor setup is Windows 10

I hope this information proves useful and helps you make good use of multiple displays in the future. I relied heavily on many sources while writing this post, should anyone wish to dive deeper into what is going on behind the scenes.


The New and Improved Plugable USB2-MICRO-250X Digital Microscope Mon, 01 Jun 2015 14:57:38 +0000
We have received overwhelmingly positive feedback from schools, hobbyists, and awesome parents making science a lot more interesting for their children with the USB2-MICRO-200X. So you might ask, why improve on something that is already a favorite with customers? The answer is simple: because we knew it could be better!

The USB2-MICRO-250X boasts improvements in both hardware and software. The stand is larger and more versatile, the microscope body is much more compact and stronger, the LEDs are more powerful, and the snapshot button is no longer a “button”. We’ve even released entirely new software that is both easier to use and universal. Read on to see why we went in the direction we did, and what it means for you.



Left: Original metal ball joint stand, Right: New flexible arm observation stand

The stand is by far the largest improvement over the original. The original stand had ball joints that allowed positioning of the microscope body over the object. Besides being too small for viewing larger objects, constant re-positioning eventually wore out the ball joints, and the microscope would no longer hold its position. The new stand addresses these concerns by using a flexible arm with a suction cup base. This arm can be positioned in any number of ways, doesn’t wear out, and the suction cup can be attached to any smooth surface or to the included base.

Microscope Body


Left: Larger original microscope, Right: New compact microscope

Overall, the new microscope body is much more compact and robust than the original. The original body, with enough force, could be broken apart by our younger users. The new one is about an inch shorter and the focus ring fully surrounds the body, making one-handed operation much easier.

The finish on the old microscope was also rubberized, whereas the new one isn’t. The rubberized finish had a tendency to acquire scratches and fingerprints over time. The new microscope is made of thick, satin-finish, injection-molded plastic, which means much less scuffing and no fingerprints left behind.



Left: Original 5mm LED array, Right: New SMD LED array

The original light used an array of eight 5mm low power through-hole LEDs. We were able to cut the number of LEDs in half by using much more efficient surface mount LEDs. This also allowed us to mount a diffuser in front of the LEDs which makes for much smoother light distribution and less glare. Also, unlike the original microscope, the LEDs now turn on only when the microscope is being used.

The following comparison images were taken at the same distance and same resolution. Notice the much more even light distribution that lets you see all the dips and valleys, not to mention the higher magnification.


Left: New lights, better contrast, Right: Original lights, washed out

Snapshot Button


Left: Original snapshot button, Right: New capacitive touch button

The new snapshot button isn’t actually a button at all, but a capacitive touch sensor. This greatly improves your ability to take quality photos compared to the original. The original’s physical button required a certain amount of force to depress, so you had to stabilize the microscope while pressing it, and photos could come out blurry or shifted from how you originally positioned them. This led many customers to skip the button entirely and just click the snapshot button in the software instead.

The new button only requires the lightest touch to take a picture. While this can lead to unintentional pictures being taken when handling the microscope, it’s far better than having a button you can’t use.

Software and Compatibility

We recently released our completely new Digital Viewer software to users on the original microscope’s page. This software was developed with the new microscope in mind, but it’s actually a universal webcam application that requires no proprietary drivers, so it will work with any video capture device. This avoids all the issues we had with the old software, which had device detection problems.

The new software also has a completely fresh user interface. The biggest visual difference is the large, culturally universal icon buttons for camera functions at the top of the window. But we also added direct folder navigation to the main window so you can browse through your entire library without having to trudge through menus.

So if you are a fan of SCIENCE, or need a new digital microscope, the USB2-MICRO-250X will be your trusty lab partner. Keep scrolling for a range of sample pictures taken with the new microscope.


And on top of all the technical improvements, we’ve worked with our suppliers on volume orders to get the cost down.

Have any questions? We’re happy to help! Just comment below or email us anytime.

Where to Buy

Connecting your existing USB devices to a new USB-C laptop or tablet Fri, 29 May 2015 19:02:56 +0000
Plugable USBC-AF3 Adapter shown connecting
a Plugable USB3-HUB3ME hub + Ethernet adapter to a MacBook.

One of the wonderful things about USB has been backwards compatibility. You can take a USB 1.1 device like a mouse from 15 years ago, plug it into any computer today, and it will work.

The new USB Type C connector (USB-C) changes things around in terms of physical compatibility — you can’t just plug old devices into the new, smaller “Type C” port.

But all the wiring is there for full backward compatibility with no performance or functionality lost. All it takes is a simple cable to convert between the connector sizes. The new Plugable USBC-AF3 is a high-quality version of this kind of cable.

So if you have a new 2015 MacBook, 2015 Chromebook Pixel 2, or any of the other coming flood of USB-C devices, this cable will let your older USB “A” device work with it. As always, just let us know if you have any questions, we’re happy to help.

Where to buy

Kickstarter for Plugable Ultimate USB-C Dock Passes 5K in 24 hours Fri, 29 May 2015 16:04:05 +0000

The Plugable Ultimate USB-C Dock has over $5K in backing in 24 hours on Kickstarter.

USB-C will be the biggest story in how we connect things to computers over the coming years. It supports simultaneous charging and device connectivity in ways that previous USB connectors could not.

But a laptop, tablet, or phone with just a single USB-C connector (which will be common) needs a dock to connect everything. This is the situation for Apple’s 2015 MacBook which launched in April.

It’s technically challenging to deliver a dock which implements USB’s new Power Delivery specification, so most of the docks announced so far re-use the laptop’s own power supply. Plugable’s dock does two things which are useful and unique in combination:

1) The Plugable Dock fully implements USB Power Delivery, includes its own power supply, and charges any PD-compliant device like the MacBook. So you can set aside your MacBook’s or other device’s power supply as a spare.
2) Some USB-C computers and phones will support an external monitor directly via USB-C. Some won’t. Some will allow drivers to be installed; some won’t. Our dock includes 3 graphics outputs: one using USB-C’s built-in VESA Alternate Mode, and two via the well-known DisplayLink technology. So we’ll cover the maximum number of systems with at least one or two monitors — and on most Mac and Windows systems, up to 3 monitors.

An interesting example of the challenge is the coming generation of Android-M phones with USB-C support that Google announced yesterday. The dock will provide several useful functions on any USB-C Android phone, and we’ll describe that in a coming post.

If you have a USB-C system currently (MacBook 2015 or Chromebook Pixel 2015) or are planning to get one of the many USB-C systems coming this summer and fall, we hope you’ll consider backing our Kickstarter and getting your own ultimate USB-C dock.

Quick link to the Kickstarter:

Have any questions or comments? We’d love to hear from you. Please feel free to comment below.

A Bluetooth Keyboard that’s More Portable and Durable Wed, 20 May 2015 22:19:47 +0000 Plugable’s Bluetooth Folding Keyboard just launched. It aspires to be the perfect portable keyboard for your phone.

The best possible Bluetooth keyboard would be one that’s compatible – a standard Bluetooth keyboard that also supports the special keys and codes of Windows, iOS, and Android systems. It would be portable so you could fold it down to the size of your phone and throw it in your bag. And it would be durable so you can take it on the road and not worry.

The new Plugable BT-KEY3 Bluetooth keyboard aims for the sweet spot of all these characteristics.

Bluetooth keyboards have been around a long time. But most are too bulky to travel with. Years ago, I had one that folded and worked well. But it was all plastic and frustrated everyone by breaking often. It had a AAA battery, which was fine for the time but not as clean and convenient as a built-in battery that simply charges via USB.


The Plugable Keyboard is elegantly engineered to strike a balance between these competing demands:

  • The back is strong but lightweight aluminum. The hinges are stainless steel. This makes it extremely durable
  • The hinges are extremely smooth; it’s a pleasure opening and closing the keyboard
  • It comes with a soft but strong case that protects both the keyboard and anything else you’d throw in a bag with it
  • The case transforms into a stand for your phone or tablet and adjusts to any angle
  • The keyboard has special support for Android, Windows, and iOS keymappings
  • It has a USB-charged battery that lasts for weeks of normal use

We’re really excited about this new keyboard. Learn more here: Plugable BT-KEY3. And feel free to comment below with any questions at all. We hope you find it both exciting and useful too. Thanks for going out of your way for Plugable products!

Where to Buy

Using Easy Computer Sync to Transfer Data to a Second Drive on Your New Computer Mon, 18 May 2015 15:00:22 +0000

Many recent computers combine a small, speedy solid-state drive (SSD) for system files with a larger, slower hard disk for data files. This can pose a problem when migrating from an older computer using the Plugable Windows Transfer cable because the data files and operating system files should go to two different drives. In some cases, even without an SSD, people want to put their data on a second drive separate from their system drive.

These scenarios are not well supported in the Bravura Easy Computer Sync software supplied with the Plugable Windows Easy Transfer cable. By default, Easy Computer Sync tries to transfer all the data from the main drive on the old computer to the main drive on the new computer. If the main drive on the new computer can’t hold all that data, an error message will complain that there is not enough space on the destination drive. Otherwise the data will end up on the wrong drive.

In this post, we will look at a method for getting your data stored where you want it on the new computer. I will assume the common scenario where all the user files are transferred to the second drive on the new computer. I will use a Windows XP computer as the old one, and a Windows 8.1 computer for the new one, but this will work with any combination of supported operating systems.

With appropriate modifications, this same method can be used to select any collection of files or folders from any fixed disk on the old computer and send them to any location on any fixed disk on the new computer.

I’ll assume you have downloaded the Bravura Easy Computer Sync software, installed it on each computer, and entered your product key. If you haven’t, follow the Install Instructions section on this page.

Preparing the Old Computer

1. Plug the Windows Transfer cable into each computer and start the Bravura Easy Computer Sync software. You will see the Welcome window.

Easy Computer Sync Welcome Screen

2. On both computers, click Next twice until the screen says “Waiting for Connection.”

Waiting for Connection

3. In a moment, the on-screen message should change to “Connection Detected.” If it doesn’t change within a minute or so, temporarily disable any anti-virus or firewall software and try again. If this doesn’t work, contact Plugable support for help.

Connection Detected

4. After the connection is detected, Easy Computer Sync will display the Tools window. This is where you select the type of transfer you want to do. Since we are transferring files from an old computer to a new one, select Transfer Data to New Computer. The Sync Files option is used when you want to send data back and forth between two computers on an ongoing basis. However, the technique mentioned here will work for the Sync Files function also. The Drag & Drop function is used for manually transferring individual files and folders.

Tools Window

5. Easy Computer Sync will display a list of user folders it wants to transfer to the main drive on your new computer. Although the user folders under the user name—such as My Documents and Pictures—are frequently accessed, the individual files they contain are infrequently accessed and should go to the second drive on your new computer. Unfortunately, the software will not allow you to change the transfer location in this window. If these folders are left in their currently selected state, they will go to the smaller SSD, which is the system drive (usually the C: drive).

Remove Checks as Needed

6. Since you don’t want these user folders on your SSD, clear the check mark next to each folder here, including the Public folder. In another window, you will individually select the destination disk and folder on your new computer for each user folder you see here.

Check Marks Cleared

In the following Steps 8 through 17, we will select the destination for a single user folder on the old computer. This procedure must be repeated for each user folder you want to transfer from the Select Items to Transfer window. At the end of this post, I will show you how to make the newly transferred folders the default user folders on your new computer.

7. To make a gathering place for those folders on your new computer, start by creating a folder on the destination drive with your Windows user name. Do this before proceeding to the next step.

In this example, I want to transfer folders belonging to the user named “David” on the XP computer to a folder named “David” on the second drive of the new computer. So I use Windows Explorer to create a folder named “David” on that drive.

Create a folder named "David"

Setting up a folder for transfer to second drive of the new computer (repeat for each folder)

Do Steps 8 through 17 for each user folder you want to transfer from those shown in the Select Items to Transfer window. You can also use this procedure to transfer other data folders on your old computer if you know where they are. In this example, I will transfer the My Documents and Desktop folders from the David user on the XP computer to folders I will create with the same names in the David folder on the second disk of the new computer. In real life, you will probably want to transfer all the folders shown under your user name in this window.

8. Click Add Folder.

Click Add Folder

9. This opens a new window that shows drives on the old computer and the new computer. You can expand a drive and reveal its folders by clicking the + mark next to it.

Click + to expand folders

10. Unfortunately, in this view Easy Computer Sync does not automatically show the user folders we want to transfer, such as Documents, Pictures, Desktop, Music, Videos, and the like. This is because they are located under the Documents and Settings folder on the system disk in Windows XP, or under Users in later Windows versions, and Easy Computer Sync hides these folders to protect them. To display and transfer them, you have to make these protected folders visible by clicking Show Protected Folders at the lower left.

Click Show Protected Folders

11. A confirmation window will appear. Click Yes.


12. Expand the local disk again by clicking the + symbol.

Expand Local Disk

13. The User folders are now visible. Expand the Documents and Settings folder in Windows XP or the Users folder in later Windows versions to view them. Expand the folder of the user you are transferring (David in this example.)

Expand Documents and Settings folder or Users folder

14. In the left-side panel, highlight the folder on the old computer (such as My Documents) that you want to transfer. With it still highlighted, go to the right-side panel and expand the drive you want to transfer that folder to, and highlight the user name folder you made on that drive in Step 7.

10 Click David

15. Use the Create Folder function to create a new folder for the contents of the folder you are transferring. The new folder will be created under the one highlighted in the right-side panel.

Create Documents Folder

16. Click Create Folder.

Type in folder name

17. A text entry box will appear. Type in the name of the folder and click OK. The new folder will be created and shown in the Select Folders window. With both the source folder and the destination folder highlighted, click OK in the Select Folders window to save this pair of folders.

12.5 After make docs folder

The Select Folders Window will close and you will be returned to the Select Items to Transfer window, where you should see a check mark next to the name of the folder you just added.

12.7 Select items to transfer after my docs

18. Use the same procedure to add each folder you want to transfer. The screenshot below shows the Desktop folder being added.

Adding Desktop folder

19. When finished adding each new folder to be transferred, click OK in the Select Folders window. After the final folder has been selected, make sure each folder you selected has a check mark next to it in the Select Items to Transfer window, as shown below. Make sure any folders you don’t want to be transferred are not checked. Easy Computer Sync will remember these settings for future transfers between the two computers.

Make sure all folders to be transferred have checkmarks next to them.

20. When all is ready, click Next in the Select Items to Transfer window to start the transfer. The Transferring Files window will show a progress bar while the files are being transferred. This may take a long time for large amounts of data.

File transfer in progress

21. When the transfer is finished, the Finished window will be displayed, showing how many files were transferred.

Finished window

22. Click View Log to see a detailed view of which files were transferred. This is useful if Easy Computer Sync reports that some files were not transferred. The log opens in Notepad as a text file and can be saved from there.

Transfer log

This completes the procedure for transferring the user folders to your new computer. If you want to make those newly transferred folders your default user folders, follow the procedure below.

Changing your default user folders to the ones you transferred

After transferring your user folders to the second drive on your new computer, you may wish to set those folders as the default user folders in place of the original ones on your SSD. Windows and many programs automatically select the default folders as the save location for files. For example, Word saves documents in the Documents folder, photo programs like Picasa save photos in your Pictures folder, and many music programs save music in your Music folder. You change the default separately for each user folder. Please note that this will permanently alter your computer’s setup.

1. Open Windows Explorer and navigate to the current default folder. Right-click it and select Properties.

1 open props

2. In the Properties Window, select the Location tab.

Click Location

3. In the Location tab, edit the text box to show the location of the newly transferred folder. Clicking the Find Target… button will open another Explorer window where you can locate the target folder and enter its location in the text box.

3 name changed

4. Click OK. You will be asked if you want to move your documents and other files from the current default folder to the new one. Click Yes here to consolidate all the files in one folder. A window will appear showing the transfer operation.

Move files

If you have any questions or issues following these instructions, please leave a comment on this post, or contact us.

DisplayLink Launches Support for Android 5.0 and Higher Docking Fri, 15 May 2015 18:33:14 +0000 For the first time, you can mirror the display of a supported Android device onto a monitor or TV through the device’s USB port, using a newly-released driver by display chip maker DisplayLink. The driver works for adapters or docking stations using DisplayLink’s chips that are attached to tablets and smartphones running Android version 5.0 (Lollipop) or higher. Using the driver may also give access to other docking station functions, such as Ethernet and sound if the Android device already supports those functions. All Plugable USB display adapters and docking stations use DisplayLink chips.

This is a beta-level software release, which means there are crashes and bugs to deal with in specific scenarios, but the potential is exciting. Here is an example of how everything works so far.

The screenshots above show the app in the Google Play Store and the main screen when the app is launched. So what happens when you plug a DisplayLink device into an Android device?

The screen on the left shows what happens when you connect a DisplayLink device (in this case one of our Plugable UD-3000 docks) to a Motorola Moto X Gen 2 phone with the required USB OTG adapter. The app will prompt to allow the DisplayLink device to become active, and then warn that everything on screen will be captured and sent to the DisplayLink device. Once you allow the action, the magic happens:


So what all is working here?

  • A single mirrored display supporting up to 1920 x 1080 resolution
  • A USB hub with wired keyboard and mouse
  • The analog audio output (but not the input) of the dock itself

Once the DisplayLink device is in use, there are subtle indicators that it is working. In the notification pull-down there is an indicator that DisplayLink Desktop is running and the Cast Screen option has turned into a DisplayLink Desktop icon.

As this is a beta, I did run into some issues. While testing, I received a text message that caused the application to crash. When I disconnected the phone from the dock, the OS froze and eventually rebooted. Again, this is an early beta, and I’m sure these bugs will be worked out with time.

Disconnecting the phone from the UD-3000 and connecting it to our UD-PRO8 docking station yields even more interesting results. Not only can I make use of the additional display, but the dock’s wired Ethernet also works, since the ASIX Ethernet chip used in the dock is generally supported under Linux, which Android is based on. Unfortunately, the Moto X Gen 2 won’t pull a charge from the UD-PRO8 while connected, so the battery will discharge while docked.

Though this is just an early beta, there are a lot of exciting possibilities for this new driver. With further development, it may be possible to use an Android phone or tablet as a primary system. We’ll keep you up to date as the beta progresses, and we’d also love to hear about your experiences in the comments section!

Most Android devices do not have a full-sized USB “A” port, so a USB OTG adapter is required to connect any USB devices.

Hardware we have tested so far:

  • Motorola Moto X Gen 2 with Android Lollipop 5.0
  • Google Nexus 5 with Android Lollipop 5.1
  • Google Nexus 7 1st Gen (2012) with Android 5.1 (did NOT work)
  • Google Nexus 7 2nd Gen (2013) with Android 5.0

Plugable Devices Tested and recommended for testing:

While this is beta-level support, it’s exciting to have the potential to turn Android devices into desktop replacements with keyboard, mouse, a big screen, and other USB devices. If you put it to use, please let us know your results of testing with your own devices in the comments. We’d love to hear from you!

Plugable’s Line of USB-C Products Wed, 13 May 2015 20:01:49 +0000

From top to bottom: Plugable’s Gigabit Ethernet adapter plus 3-Port USB 3.0 Hub combo, Flash Memory Card Reader, Gigabit Ethernet adapter, and Passive USB-C Male to USB 3.0 Female cable

With the announcement of the USB Type-C connector last year, everything changed.

A reversible, universal, multi-mode, charging, up to 10Gb/s data cable? Yes please! This blows any other connection standard out of the water. You can imagine a utopian future where all computers have standardized on USB Type-C ports.

In recent months, we’re finally starting to see devices coming out with this new connector. On the leading edge so far are the Apple MacBook Retina 12″ 2015 and the Google Chromebook Pixel [2] 2015. The MacBook specifically only has a single USB-C port, which in the current non-utopian world means most of your current accessories become instantly obsolete. That’s where we come in.

We are excited to show off our new hub, card reader, and Ethernet adapters that all connect directly to your USB-C laptop or tablet. They all work with either the new MacBook or Chromebook Pixel. All three are based on our current USB 3.0 offerings, so compatibility is already well tested and understood. The first manufacturing batch is still in progress, but these will be on the market very soon.

The USB-C to female A cable is a new type of product. Through this cable, you can plug basically any USB device you own into your new USB-C equipped laptop. There are no active adapters involved here, and the cable is nice and short, so there shouldn’t be any signal interference or timing issues associated with active cables. Just a clean, well shielded, straight-through cable that allows you to adapt any USB 2.0 or USB 3.0 device you already own to that new laptop or tablet.

We fully intend to dive headfirst into the USB-C marketplace, so there will be much more to come. But for now, we are thrilled to reveal these first offerings. If you have any questions or feedback, we’d love to hear from you in the comments below.

Quick Fix for Problems Using Bluetooth and Blueman from the Raspberry Pi Raspbian Desktop Mon, 11 May 2015 23:20:28 +0000

Bluetooth and Raspberry Pi are a natural combination, allowing your Pi to communicate wirelessly with devices like our Bluetooth home automation switch. However, recent versions of Raspbian have had permission issues that won’t let ordinary users open Blueman, the desktop Bluetooth program, without being root. Fortunately, the solution is easy: just add the current user to the bluetooth group. Here are the details:

The Problem

You install Raspbian on your Pi, boot up, and log in as “pi” or another normal user. You install Blueman, the graphical interface to Bluetooth for the Pi, according to the instructions here, and plug in your Bluetooth adapter. You select Bluetooth Manager from Menu > Preferences and the icon appears on your desktop. But when you click on it, or right-click and select Setup New Device, the rotating “busy” symbol appears next to the cursor for a moment, but the Blueman window fails to open.

The Solution

This happens because when Raspbian installs Blueman and the other Bluetooth software, it does not automatically add ordinary users to the bluetooth group. This group gives users permission to access D-Bus, which Bluetooth uses for communication in Raspbian. As a result, a Permission Denied error occurs whenever Blueman, started by the unprivileged user, attempts to access D-Bus.

However, adding the “pi” user to the bluetooth group causes a new problem: The next time you start the desktop, the taskbar at the top of the UI will flash on and off and not fully appear. This appears to be related to the UI changes that were instituted right around the time Blueman stopped working. The solution is easy, but it requires returning to the default LXDE desktop with the taskbar at the bottom. We will add the user that will use Bluetooth to the bluetooth group and remove the UI changes:

1. Open a terminal window.
2. Type the following at the prompt:
sudo usermod -aG bluetooth <username>

Replace <username> with your actual username, usually pi. The -a flag appends the group, so your existing group memberships are preserved.

Add user to Bluetooth group

You can check it by typing:

cat /etc/group | grep bluetooth

You should see your username at the end of the group:


Check if current user is in Bluetooth group


Return to the default UI with this command:
sudo apt-get remove raspberrypi-ui-mods

Type sudo reboot to restart your Pi, then log in again. You should now be able to access Bluetooth using Blueman, and your taskbar will be at the bottom and not the top. The UI will not be as pretty, but it will work.
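After logging back in, you can confirm the group change took effect by listing the current user’s groups (a quick check; “bluetooth” should appear in the output):

```shell
# List the group names for the current session; after re-login,
# "bluetooth" should appear in this list
id -nG
```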

I hope this guide is useful. If you have any other questions, please comment here or contact us.

Restoring Lost Bluetooth Icon to Your Windows System Tray Thu, 30 Apr 2015 15:39:54 +0000 The Bluetooth icon in the Windows system tray provides an easy way to connect and manage Bluetooth devices on your Windows 7, 8, or 8.1 computer, and many Bluetooth users rely on it. But an accidental click in the wrong place can cause you to lose that icon, leaving no obvious way to access Bluetooth settings. Here is how to restore it.

The Problem

When Bluetooth is activated in a Windows 7, 8, or 8.1 computer, Windows places a Bluetooth icon in the System Tray—the collection of easily accessible icons near the clock. It will either appear on the taskbar or can be accessed by clicking the upward-pointing triangle.
Clicking the Bluetooth icon displays a menu with entries for adding and managing Bluetooth devices. At the bottom of the menu, in a location that is easy to click by mistake, there is a Remove Icon entry. This removes the icon and closes the menu with no notification or confirmation. The next time you go to use Bluetooth, the icon is unexpectedly gone. With no icon or other indication that Bluetooth is available, it is easy to assume that Bluetooth is broken or no longer exists on the computer. It is difficult to understand why Microsoft included this, since icons in the System Tray can be easily hidden using the Customize link on the menu.

Although it is extremely easy to remove the icon by accident, Windows provides no easy way to restore it. Despite the importance of Bluetooth these days, especially to tablet users, Windows provides no Bluetooth control applet in the Control Panel. In Windows 8/8.1, a Bluetooth settings panel is available several levels deep from the Settings icon in the Charms menu, but like most Charms panels, its functionality is limited, and it includes no method to restore the Bluetooth icon.

Restoring the Icon

A detailed Bluetooth control applet does exist. Called Change Bluetooth Settings, it can be opened by searching for it in the Start menu. The procedure is slightly different in Windows 7 and in Windows 8/8.1, but once found, the icon is easy to restore.

Windows 7

1. Click the Start button.
Search Change Bluetooth Settings Windows 7
2. Type “change Bluetooth settings” in the Search Programs and Files box directly above the Start button.
3. “Change Bluetooth Settings” should appear in a list of search results as you type. Click it to open the Bluetooth Settings window shown below.
Change Bluetooth Icon in Windows 7
4. Under the Options tab, place a check in the box next to Show the Bluetooth icon in the notification area.
5. Click OK and restart Windows. The icon should reappear the next time you log in.

Windows 8/8.1

1. Right-click the Start button.
2. Select Search.
Search Change Bluetooth Settings Windows 8
3. Making sure Everywhere is selected, type “change Bluetooth settings.”
4. “Change Bluetooth Settings” should appear in a list of search results as you type. Click it to open the Bluetooth Settings window shown below.
Change Bluetooth Icon Windows 8
5. Under the Options tab, place a check in the box next to Show the Bluetooth icon in the notification area.
6. Click OK and restart Windows. The icon should reappear the next time you log in.

USB Charging Past, Present, and Future – Type-C Mon, 27 Apr 2015 16:00:24 +0000 Last October we wrote about choosing between a dedicated USB charger or a charge and sync USB hub. At the time USB charging could be quite frustrating for consumers as device and charger incompatibilities were rampant. Fortunately dedicated smart chargers, charge and sync compliant hubs, and charge and sync compliant devices are now far more common and USB charging has become more plug and play than it ever used to be. The era of fully standardized USB is upon us. USB Type-C.



For many of us, USB charging just a few years ago was a dark time. It seemed everyone had some device that would only charge from its stock charger and nothing else, or that could sync data with a computer but couldn’t charge from it. When the Apple iPad was released, many found themselves in this situation. Fortunately, given its mass popularity, 3rd-party chargers were quickly developed to emulate the Apple charging signals, which went on to become the unofficial universal standard for many other devices. This was great for consumers who didn’t want to buy expensive stock chargers, but it still didn’t solve the charge-and-sync problem: often, syncing while charging either isn’t possible or is extremely slow. Eventually the USB Implementers Forum (USB-IF) designed a standard to resolve this issue, Battery Charging 1.2, and it has slowly been adopted into most modern devices, including later generations of the iPhone and iPad.


Today several chipset manufacturers make smart chipsets that try to intelligently detect what device you are using and emulate the best charging signal for that device. Most major phones and tablets are supported, including Apple iOS, Android, and Windows Mobile devices from many different manufacturers. For 2015 we have introduced a whole new line of dedicated smart chargers and an update to our bestselling USB 3.0 hub that makes it BC 1.2 compliant, ensuring almost any USB device will charge at the fastest rate possible.


The future for USB charging appears bright with the introduction of USB Type-C, a new standardized universal connector that will hopefully become commonplace on future devices from cell phones to laptops. Currently there aren’t many USB Type-C devices on the market but the two that we’ve been testing have interchangeable power adapters thanks to cross-compatibility of the USB-IF Power Delivery standard.

Our testing of this cross-compatibility yielded some interesting results, and there are some limitations that need to be addressed. The most important thing to know is that the USB-IF Power Delivery standard defines several power profiles, and not every power adapter will support them all:


  • The Apple MacBook 12″ with USB Type-C ships with a 29W power adapter. Looking at the specs printed on the adapter, it supports two power profiles: 5.2V at 2.4A and 14.5V at 2A (neither of which is a standard PD profile in the chart above).
  • The Google Chromebook Pixel [2] 2015 with USB Type-C ships with a 60W power adapter. Its specs show support for 5V, 12V, and 20V at 3A (amperage for 5V and 12V is not labeled, but we’re assuming for now that it supports 5V at 2A and 12V at both 1.5A and 3A as shown in the chart above).
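To compare the labeled profiles at a glance, multiply volts by amps for each. This is a quick sketch using only the figures printed on the two adapters; the results are only as reliable as the labels themselves:

```shell
# Wattage = volts x amps for each labeled power profile
awk 'BEGIN {
  printf "MacBook profile 1:  %.2f W\n", 5.2  * 2.4   # low-power profile
  printf "MacBook profile 2:  %.1f W\n", 14.5 * 2.0   # the full 29W rating
  printf "Pixel top profile:  %.0f W\n", 20.0 * 3.0   # the full 60W rating
}'
```

The two MacBook profiles sum to roughly the 29W printed on the adapter, while the Pixel’s 20V profile alone accounts for its 60W rating.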


It’s clear from our testing that the MacBook will accept charging signals from a PD power adapter like the Chromebook’s, but it is also clear that the MacBook adapter isn’t quite following the PD standard; it appears to be something custom from Apple. We also found the MacBook does not charge any faster or slower using the Chromebook’s more powerful power adapter.

What is perhaps more interesting is that the Chromebook Pixel [2] 2015 charges at all when using the MacBook power adapter (albeit at a slower rate and with a warning message: “Low power charger connected – Your Chromebook may not charge while it is turned on”). This may suggest that the MacBook power adapter is indeed capable of following at least some PD profiles, but that they just aren’t labeled on the power adapter.

While this is just an early look at USB Type-C and the USB-IF Power Delivery standard, later this year we expect to see several new systems sporting USB Type-C and we’ll be sure to keep you updated on the charging situation. We hope that cross-compatibility between devices and chargers will continue and only become more universal.

Selecting the Right USB-Ethernet Adapter for your Computer and Network Tue, 21 Apr 2015 16:11:31 +0000
You’re a consultant, and you stride into a new client’s office with your MacBook Air, only to discover they’ve disabled WiFi for security reasons.

You pull out your Windows tablet in a hotel room, but the only internet available is coming through a wire on the desk.

You’re a gamer and you’re tired of watching helplessly as your frozen character dies of WiFi-induced lag.

You plug your computer into your brand new Gigabit fiber optic connection, and it’s no faster than before.

A USB to Ethernet adapter can be the answer in each of these scenarios. Any of these adapters can add an Ethernet port to a supported computer that lacks one. Some offer speeds far faster than a typical wireless connection or an older network card. A wired connection is also more stable, more reliable, and more secure than a wireless connection.

Plugable offers five USB-Ethernet adapters to accommodate your needs, including the USB2-E100, USB2-E1000, and USB3-E1000. There is also the USB3-HUB3ME that combines a USB 3.0 four-port hub with the chipset of the USB3-E1000, and the USB2-OTGE100, which is electrically identical to the USB2-E100, but features a micro-USB connector especially suited for tablets and smartphones that don’t have a standard full-size USB port.

Which one is right for you? Which will work with your device? How can you get the highest speeds without wasting money on unneeded capacity or buying something that doesn’t work with your computer? To make a good decision, you can think about the 3 C’s: Compatibility, Capacity, and Cost.


The table below gives an overview of the different Plugable USB-Ethernet adapters and their compatibility with different operating systems. Please note that even if a device is compatible with a given computer, it may need to be configured to work on a particular network, especially in corporate or institutional settings like hospitals or universities. Please consult your IT staff for details. When plugging directly into a cable or DSL modem on a home network, it may be necessary to disconnect power from the modem for 30 seconds, then plug it back in again to make it accept the new device.


All Plugable adapters can be used with any Windows computer running Windows XP or later that has at least one USB port. However, while USB 3.0 adapters will work in a USB 2.0 port, they will not reach their full speed potential unless plugged into a USB 3.0 port. Also, computers with USB 3.0 ports that are several years old may need a driver upgrade to work properly.

The USB2-OTGE100 is especially suited to the many recently-introduced Windows tablets that only have micro-USB ports. While electrically identical to the USB2-E100, its male micro-USB connector eliminates the need for an On-The-Go (OTG) cable. However, because recent Windows tablets contain a full-featured Windows 8.1 operating system, they are fully compatible with any Plugable USB-Ethernet adapter provided an OTG cable is available, and are capable of higher speeds on a Gigabit network if a Plugable Gigabit USB adapter like the USB2-E1000 or USB3-E1000 is used.

Built-in drivers are available for some adapters in Windows 8 and later. If no driver is present, Windows will download the drivers automatically if the computer is connected to the Internet. If no connection is available, for example, because you are connecting a WiFi-only computer in a location with no WiFi, you can install drivers from the included CD, or download them on another computer from the Plugable website, copy them to a flash drive, plug it into your computer, and install from there.


OS X versions from OS X 10.6 (Snow Leopard) to OS X 10.10 (Yosemite) should already contain drivers that are compatible with all Plugable USB-Ethernet adapters. However, if for some reason the drivers are missing, they can be easily downloaded from the Plugable website. Unfortunately, these USB-Ethernet adapters are not compatible with Apple devices like the iPhone or iPad that use iOS.


Chromebook computers already have the necessary drivers installed for all Plugable USB-Ethernet adapters, which should work out of the box.


In Linux systems, support for the different chipsets in Plugable USB-Ethernet adapters depends on the kernel version, as shown in the table above. However, expert Linux users can add support to earlier versions by rebuilding the kernel module from the source code. You can find your kernel version by opening a terminal window and typing uname -r.
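You can check the running kernel and whether a given driver module is available without rebuilding anything. The module names below are assumptions based on the usual in-kernel drivers for these chipsets (`asix` for the USB 2.0 adapters, `ax88179_178a` for the USB 3.0 models); confirm them against your own kernel:

```shell
# Print the running kernel version
uname -r

# Ask the kernel whether it knows the ASIX driver module
# (prints nothing if the module is absent from this kernel)
modinfo asix 2>/dev/null | head -n 3
```

If `modinfo` prints a filename and description, the driver shipped with your kernel and the adapter should work when plugged in.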


While the chipsets in several Plugable USB-Ethernet adapters are supported in Android itself after version 4.0, they will actually work only if the maker of the phone or tablet has installed the necessary drivers. If the drivers are not already included by the maker, installing after the fact is extremely difficult and requires professional-level expertise with Android.

On the product pages for the USB2-E100, the USB2-OTGE100, and the USB2-E1000, there is a list of known compatible and non-compatible devices. The USB2-E100 and USB2-E1000 require OTG cables to connect. USB 3.0 devices are not supported for Android at this time.

If your device is not on the list and you’ve tested it with one of our adapters, email us or leave a comment below. We’ll add it to the list.


Sadly, iPhones, iPads, and other Apple mobile devices using iOS do not support any Plugable USB-Ethernet devices at present.

Capacity and Cost

Everyone wants the fastest possible network access, whether for connecting to the internet or downloading files from an office server. But there’s no point spending money on capacity you can’t use. For example, if you are accessing the internet through a cable connection that promises a maximum 25 Megabits per second (Mbps), there is no reason to invest the extra money to buy a USB3-E1000 adapter that can reach speeds 40 or 50 times faster. Our USB2-E100, with its 95Mbps maximum speed, would be a better choice. Getting a faster adapter won’t make your network faster if its speed is limited by your internet connection or other hardware on the network.

The speed at which data can be transferred over a network depends on a lot of variables, and the final speed will only be as fast as the slowest link affecting it. To get the most speed possible, be sure your router, cables, and any switches or hubs are also designed for the speed you are hoping for. If there are many computers connected to your network, or if any connected computer has a virus or trojan, this will also degrade speed.

For the purposes of selecting the right adapter for your situation, you’ll want to select an adapter that exceeds the maximum speed of your network, while taking into consideration any likely future improvements. Network speeds are usually measured in Megabits (one million bits) per second (Mbps). Be careful not to confuse Megabits with the Megabytes commonly used to measure file sizes and hard drive speeds. A byte is made up of 8 bits, so it would take more than 8 seconds to download a 100 Megabyte file at 100 Megabits per second.
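As a quick sanity check of that arithmetic (ideal numbers only; real transfers add protocol overhead, so actual times run longer):

```shell
# 100 Megabytes = 800 Megabits; at 100 Mbps the ideal transfer time is 8 seconds
file_megabytes=100
link_mbps=100
seconds=$(( file_megabytes * 8 / link_mbps ))
echo "Ideal transfer time: ${seconds} seconds"
```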

Generally for a home network, the most important consideration is the speed you have contracted for with your Internet service provider (ISP). Contact them if you aren’t sure. Speeds of 10-50 Mbps are common, but recently download speeds in excess of 1 Gigabit per second (1000 Mbps) have become available in some areas. In an office setting where you might be spending a lot of time communicating with another server on the same local network, the maximum speed of the local network hardware and the server you are accessing might be the most important consideration. Some offices also have fiber optic access to the internet at 1000Mbps or higher.

I hope this guide is useful. If you have any other questions, please comment here or contact us.

Try Pi WiFi: Using the Plugable USB WiFi Adapter with Your Raspberry Pi 2 Tue, 14 Apr 2015 22:41:42 +0000 Just about everyone who has ridden the Raspberry Pi wave in the last two years is excited about the possibilities presented by the new Raspberry Pi 2. With six times the processing power, it has the potential to turn a fun computer for experiments and hacks into a respectable piece of hardware that could almost replace a desktop computer.

But what if your desk is far from an Ethernet connection? What if you want to use the Pi in an isolated place to control security cameras or home automation? You’ll probably need a WiFi connection, but the Raspberry Pi doesn’t come with this capability built in.

The Plugable USB-WIFINT is useful in this situation. All you have to do is plug it in, set your network ID and password, and you are ready to roll. The same procedure should work with any WiFi adapter that uses the same Realtek RTL8188CUS chipset. It also works with any Raspberry Pi model updated to the latest version of Raspbian. To get started, you’ll need your SSID (wireless network name) and the password for the network.

Connecting from the Graphical Interface

1. If you already have an internet connection, make sure your Pi is up-to-date by issuing the following commands:

sudo apt-get update
sudo apt-get upgrade

Accept any updates and wait for them to be installed. If you don’t have an internet connection, you can do this after the adapter is installed.

2. Plug in your Plugable USB-WIFINT adapter. The red light on it should blink for a few moments.

3. On the desktop, click Menu > Preferences > WiFi Configuration.


A window titled wpa_gui will appear on the screen. You should see wlan0 in the Adapter field.


4. Click the Scan button to scan for wireless networks in your vicinity.


5. Double-click on your network in the scan results. A configuration screen for your network should open. Enter your network key into the PSK field.


6. Click Add, then click Connect on the main window. You should now be connected to WiFi.
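Under the hood, these steps add a network block to `/etc/wpa_supplicant/wpa_supplicant.conf`. If you prefer to work from the terminal (on a headless Pi, for instance), you can append an equivalent block yourself as root; the SSID and psk values below are placeholders for your own:

```
network={
    ssid="YourNetworkName"
    psk="YourNetworkPassword"
}
```

After saving the file, reboot (or restart wpa_supplicant) and the Pi should join the network.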


If you have any questions at all, please comment below or email us. We’re happy to help!

Plugable’s New USB 3.0 Graphics Adapter for 2K and 4K HDMI Displays Tue, 07 Apr 2015 21:52:55 +0000 When Plugable was the first company in the world to launch a 4K-capable USB 3.0 DisplayPort graphics adapter last fall, we heard from users around the globe who were excited to be able to add Ultra-High-Definition displays to their systems via USB. As popular as our UGA-4KDP DisplayPort adapter has been, we’ve also heard from many who have been awaiting the release of our HDMI version of the adapter. Today we’re happy to announce that the wait is over! The Plugable USB 3.0 4K HDMI graphics adapter (UGA-4KHDMI) is now available. (Well, the wait is over for US customers. We expect the adapters to be available in non-US geographies soon.)

The USB 3.0 4K HDMI graphics adapter is powered by the same DisplayLink DL-5500 chipset as our UGA-4KDP, and the performance specifications are identical, with support for displays up to 3840×2160@30Hz and 2560×1600/2560×1440@60Hz. Because it is designed for 4K, it can of course drive 1080p (1920×1080) and smaller resolutions with even better performance. The DL-5500 chipset was designed to have a DisplayPort output, so to provide an HDMI port our new UGA-4KHDMI has an integrated active DisplayPort to HDMI conversion chip matched to the full capabilities of the DisplayLink chip.
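To put those resolutions in perspective, here are the raw pixel counts per frame (simple arithmetic, nothing adapter-specific):

```shell
# Pixels per frame at each supported resolution
awk 'BEGIN {
  printf "3840x2160: %d pixels\n", 3840 * 2160   # 8,294,400 (4x 1080p)
  printf "2560x1440: %d pixels\n", 2560 * 1440   # 3,686,400
  printf "1920x1080: %d pixels\n", 1920 * 1080   # 2,073,600
}'
```

A 4K frame carries exactly four times the pixels of a 1080p frame, which is why 4K output is limited to 30Hz while the lower resolutions can run at 60Hz.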

As a market leader in USB graphics technology, Plugable set out to create a unified design for our three flagship graphics adapters: a design that would look great while also focusing on improved reliability over previous USB 3.0 graphics adapters. We love all things USB here at Plugable, but we’ve found that the detachable USB 3.0 “Micro-B” connection standard can be more fragile than we’d like. When designing our new adapters, we upgraded to a robust, built-in USB 3.0 cable and eliminated the USB 3.0 Micro-B connector.

The Plugable USB 3.0 4K HDMI adapter is the newest member of our redesigned family of USB 3.0 graphics adapters. Like its siblings the UGA-3000 and the UGA-4KDP, our new 4K HDMI adapter shares the same aesthetically-pleasing, clean design and solid built-in 12″ (30cm) USB 3.0 cable. If you’re unsure which adapter will best fit your needs, please reference the comparison chart below, or feel free to reach out via email and we’ll be happy to help!

2015 Plugable USB 3.0 Graphics Adapter Comparison

Where to buy

Massive New 97-Port USB Type-C Hub Shows Plugable’s Engineering Prowess Wed, 01 Apr 2015 14:04:26 +0000 USB’s new Type-C connector is amazing. It’s reversible (like the Apple Lightning cable), it replaces both the iconic USB Type A connector and Type B, it’s forward and backward compatible with USB 3.1 down to USB 1.1, capable of 10 Gbps (more in the future), and can deliver up to 100W in either direction by negotiation through USB Power Delivery.

But if your MacBook or tablet only has one or two Type-C ports, that just ain’t enough to enjoy all this goodness. You need more ports. And Plugable has delivered.

Introducing the Plugable USB3C-97XXX 97-Port USB 3.0 Type-C Hub.

Available now in Zaire, South Africa, Zimbabwe and Guam. US, Canada, UK, and Japan sales pending certifications.

-97-port USB 3.0 Type-C self powered hub
-Revolutionary 400W (5V, 80A) UL certified power adapter (quad US AC wall outlet plugs, 100-240V 50/60Hz)
-Consumes only 27W when idle, which means there’s about 93.25% left for all your dozens of USB devices
-Compatible with computers that have strong USB 3.0 xHCI host controllers and USB Type-C connectors
-Fully plug and play so you can plug those devices like they’re hot
-Features thirty two VIA VL812 Rev B2 hub chipsets with the latest v9095 firmware for maximum compatibility and performance with controllers up to the task
-Sleek piano black glossy UV clear coat finish with all 97 ports on one side to minimize cable clutter and maximize accessibility
-USB interconnect cables configured in triple twisted pairs to ensure maximum EMI protection
-Individual LED status indicators for each USB port (LEDs may blink randomly on computers with weak host controllers and may cause seizures)
-Backward, forward, and sideways compatible with all USB hosts and devices
-Minimal packaging tape and Velcro straps used in construction, making it highly recyclable

Other products seen in this video:

Questions or comments? Please reply below; we’re more than happy to help.

DisplayLink Releases Updated Mac Driver (v2.4 Beta 1) Mon, 16 Mar 2015 15:14:35 +0000 We regularly receive inquiries from Mac users who are looking for updates regarding the compatibility of our DisplayLink-based USB docking stations and graphics adapters with Mac OS X. Progress has been slow, and there hasn’t been much substantive news to report in quite some time — until today.

Some brief background for those unfamiliar with the situation: With the release of OS X 10.9 (Mavericks) almost a year and a half ago, we were disappointed to find that we could no longer recommend our DisplayLink-based docking stations and USB graphics adapters to Mac users running OS X 10.9 due to various issues introduced with the OS update. The regressions affected DisplayLink and other external display solutions (e.g. Thunderbolt to HDMI and DVI adapters; using iPads as extra displays). The 10.10 (Yosemite) update did not improve the behavior for DisplayLink devices.

Each new DisplayLink driver revision since the release of Mavericks has contained incremental improvements, though working around some of the key OS/API issues at the driver level has been a slow process.

Today DisplayLink has released their Version 2.4 Beta 1 driver for the Mac OS, and this release is the first to make significant progress on some of the core issues that have been consistently present since the release of OS X 10.9.

The following are some of the most notable fixes in this release:

  • Display layout and positioning are now preserved after system reboot and sleep/wake
  • Portrait/landscape rotation orientation is correctly applied
  • Hot-plugging or unplugging a DisplayLink adapter is much less likely to cause undesirable system behavior
  • WindowServer crashing (which causes a spontaneous log-out from the OS) has been reduced
  • CPU utilization of DisplayLinkManager has been reduced in some scenarios
  • Constant OpenGL error logging in Console experienced by some users should be resolved

As excited as we are regarding the improvements above, caution is warranted as well. DisplayLink has had to work around existing OS X issues in this new driver release, which could make support fragile as OS X updates come out.

Some of the known issues that still persist with this new driver are:

  • Some users will experience intermittent graphical corruption/distortion
  • WindowServer crashes/spontaneous log-out issues are still present in some scenarios
  • Higher than expected CPU utilization from DisplayLinkManager/WindowServer in some scenarios

Because this is a beta driver and because of the remaining Mac external display issues, we still can’t recommend our USB graphics products for use on Mac. But we’re quite glad to see this progress.

Download DisplayLink’s version 2.4 Beta 1 driver for the Mac OS here.

Comments are welcome below, though we also recommend posting your experiences in the DisplayLink Mac Forum so that DisplayLink has visibility to as much user feedback as possible.

Hands-On With USB-C on the Chromebook Pixel 2 Sat, 14 Mar 2015 16:15:48 +0000 This has been an exciting week with the launch of Apple’s new MacBook with a single USB-C port (technically USB 3.1 Gen1 Type-C). Then just a day later, Google announced their Google Chromebook Pixel 2 2015 — shipping immediately with several useful USB-C accessories.

So we had to get our hands on one and show the power of USB-C. A few of the breakthrough aspects of the new USB-C port:

  • Capable of delivering data and power, with direction negotiated (a dock could power a laptop, or a laptop could power a dock).
  • Power up to 100W – devices start at 5V but can negotiate up to 12 or 20V at 5A. The Chromebook Pixel’s supply is 12V 3A (60W), and because this is now standardized, it should be able to power any device (including the new MacBook) … well, in theory.
  • Devices can negotiate to repurpose half the data lines as an “Alternate Mode”, with a native DisplayPort video channel defined by VESA being one of the first Alternate Modes defined. The Chromebook Pixel appears to support this, which is how it implements its USB-C to HDMI adapter.
  • The USB-C port is very small and thin, but strong. Pins are mirrored on either side of the port, and hardware detects and corrects for orientation, so devices can be plugged in either way and work the same.

One Bus to rule them all, One Descriptor to find them,
One Receptacle to bring them all and in either orientation bind them

– Our geek spin on Tolkien

Dozens of companies and of course Intel were involved with the definition of USB-C. But the surprise here is Apple. Historically, they’ve intentionally created proprietary connectors or re-purposed standards in non-standard ways. But with USB-C, we’re seeing a serious Good Guy Apple moment. They contributed significantly to the USB-C connector, from supporting either orientation (like the proprietary Lightning connector) to making sure USB-C could be a functional superset of every bus that’s gone before. It’s a huge credit to Apple that they saw the potential for a single bus that could be standard across every device – Mac, iOS, Windows, Android, whatever.

Enough talking. Let’s see it in action.

And here are some of our devices that we show in the video that work with the Chromebook Pixel 2.

Chromebooks Gaining USB Multiple Monitor Support Fri, 13 Mar 2015 15:07:22 +0000

Chrome OS has begun the process of supporting DisplayLink USB 2.0 devices, which will eventually enable USB docking stations and graphics adapters for Chromebook systems.

For the moment, there are still lots of limitations, including mouse cursors not working and EDIDs getting lost. But with time and attention, this could become one more area where Chrome OS closes the productivity gap with other systems.

So what’s new? You may have seen the addition of ozone and fre(c)on (we have seen this called out as “+frecon” or “freon”) in the most recent Chrome OS builds. What does this mean? According to Google’s own François Beaufort:

This project is about removing X11 dependency and add hardware overlay support in order to provide better performance/reduced power consumption for WebGL and video and reduce Chrome OS binary size

With this switch, Google has also been able to take advantage of the DisplayLink USB 2.0 DRM/KMS driver that’s been in the Linux kernel for several years and begin work on some much-needed configuration support. While still not complete enough for normal use, this work may translate into DisplayLink functionality for Chrome OS in the near future (no release date has been officially announced yet).

Display adapters that will work in this scenario are USB 2.0 based and feature the DisplayLink DL-1x5 family of chips, since these are backed by open-source drivers. Our devices that fall into this category include:

We have done some preliminary testing using the ASUS Chromebox CN60. To get the newest build and the fre(c)on/ozone bits, we had to switch over to the dev channel. We ended up with the following build:


The CN60 already has built-in DisplayPort and HDMI video ports, but we wanted to push the envelope and add yet another monitor via our UGA-165. So, what was the result?

At first we just used two monitors overall, one plugged into the native DisplayPort or HDMI port and the additional one into the USB graphics adapter. Boom, we had instantly gained an extended monitor and gazed at all the pretty pixels.

Unfortunately we also came across our first few bugs. The mouse cursor was not visible on the extended display connected to the UGA-165 (same result for the USB-VGA-165 and UGA-2K-A), though we could still move around and bring up menus on it. We attempted to turn off mouse acceleration via the xset m command to fix this problem (a trick often mentioned by the Chrome OS and Linux community), but CROSH (the Chrome OS equivalent of the command prompt) just did not want to accept our commands, so we gave up. As an alternative, we enabled “Show large mouse cursor” in Settings and were able to use an oversized mouse cursor on both monitors (IN YOUR FACE, CROSH!).

The most lamentable fact was that attaching more than two displays (in any combination of built-in video ports and display adapters) would bring the system to its knees. All we saw were black screens and a complete system lock-up. If the third display was quickly disconnected, the system recovered; otherwise the fix was to remove the third monitor and reboot the system, which brought everything back to life.

We are excited to see this feature enabled in the dev channel and are anticipating the official arrival of ozone/fre(c)on in the stable channel once all the bugs have been ironed out.

With this support beginning to roll out to more Chromebooks, it’d be great if you could take the time to report any bugs you encounter to Google to help improve support for this scenario.

And your experiences help other Chromebook users trying the same things. Feel free to comment below. Thanks!

]]> 8
Setting up a New Hard Drive or SSD in Your Plugable Docking Station Thu, 12 Mar 2015 22:53:58 +0000 Customers often ask us why their new blank hard disk drive (HDD) or solid state drive (SSD) doesn’t show up on their computer, and are often quick to blame their Plugable docking station. Most often the drive just needs to be initialized, partitioned, and formatted. In this post we present a step-by-step guide for doing this.


Initializing prepares the drive to be used by the computer, partitioning sets aside specific areas of the disk for data, and formatting sets up the framework the computer uses to store that data. We’ll cover the most common scenarios we run into, starting with Windows and finishing with Mac OS X instructions. The following steps apply to our USB3-SATA-UASP1, USB3-SATA-U3, and our entire Plugable Storage System lineup. They also apply to new hard disks that are installed inside your computer and potentially other docking stations/enclosures/adapters. We’ll be using a 4TB hard drive as our example.

If you are trying to access existing data or attempting data recovery on your hard drive and are encountering issues, please see this post here.

Before we get started, a brief word of caution is essential. Initializing and formatting a hard drive will erase *all* information on that drive. In the case of a new drive, that’s not a matter for concern—it does not have any data on it yet to worry about. However, if there are other drives in use on your system, it’s absolutely critical to pay close attention that you don’t erase the wrong drive. If you have multiple external hard drives connected we recommend disconnecting them all prior to initializing your new drive as well, just as a precaution.

If you wish to skip to our quick instructions without the extended walk-through information click here.


For Windows XP, Vista, 7, 8/8.1, and Windows 10, the experience is basically the same, and we’ll focus on using the Windows Disk Management Console. This console shows all of the drives connected to the computer and information about how they are currently configured. It lets you create partitions on your new blank hard drive so Windows can make use of it for data storage and recognize it as a drive letter in Windows Explorer.

The quickest way to open the Disk Management Console in any Windows version is to press the Windows and R keys together on your keyboard to open the Run dialog box:


Once open, type diskmgmt.msc and press Enter (make sure you are logged in as an Administrator or the program may not run):


When the application opens, the Disk Management Console should automatically detect a new non-initialized drive and display a pop-up window asking if you’d like to initialize it:

Click to enlarge

If no pop-up appears, take a look at the console. Each disk Windows recognizes is given a number and a horizontal bar representing the capacity of the disk and any partitions that exist. The new drive you are looking for should be listed as “Not Initialized.” Right-click on that drive and select “Initialize Disk”:

Click to enlarge

In either case it is extremely important when using Disk Management to make sure that you are working with the correct hard drive. The last thing you want is to accidentally delete important data!

There will be two options to initialize the drive: Master Boot Record (MBR) or GUID Partition Table (GPT). MBR is the older legacy method of initializing drives, and is only necessary if you need to access the drive on a Windows XP system (XP cannot recognize drives initialized with GPT). GPT *must* be selected for drives over 2TB in size. If MBR is selected on a drive larger than 2TB, you will only be able to access the first 2TB of the drive, regardless of the drive’s capacity. GPT disks should be accessible to Windows systems running Vista and later:


(If you’re interested in more information about MBR vs. GPT, Microsoft has a very thorough post here.)
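That 2TB ceiling falls straight out of MBR’s 32-bit sector addressing combined with 512-byte sectors; a quick check of the math:

```python
SECTOR_BYTES = 512
MBR_MAX_SECTORS = 2**32            # MBR stores sector addresses in 32-bit fields
limit_bytes = MBR_MAX_SECTORS * SECTOR_BYTES
print(limit_bytes)                 # 2199023255552 bytes
print(limit_bytes / 2**30)         # 2048.0 -- the familiar "2TB" cap as Windows counts GB
```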

Once you’ve made your selection and clicked OK to initialize the drive, it’s time to partition and format. You can create multiple partitions if you want, but this guide assumes that you, like most people, want to access the entire drive through a single drive letter/partition. As mentioned earlier, each disk that Windows recognizes is given a number and a horizontal bar representing the space of the disk and any partitions that exist. Since we’re working with a drive that contains no partitions yet, it should be listed as “Unallocated” space. It’s a good idea at this point to make sure the drive size is what you expect it to be. In the following example, we’re working with a 4TB GPT initialized drive, which Windows reports as 3725.90 GB (Windows computes disk size differently than disk manufacturers, hence the difference):

Click to enlarge
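The gap between “4TB” on the box and the roughly 3725.90 GB Windows shows is purely a unit mismatch: drive makers count 1 GB as 10^9 bytes, while Windows divides the byte count by 2^30. A quick sketch (the exact figure depends on the drive’s true byte count, so this lands close to, not exactly on, the reported number):

```python
drive_bytes = 4 * 10**12        # "4 TB" as the manufacturer counts it
print(drive_bytes / 2**30)      # ~3725.29 -- what Windows labels "GB" (really GiB)
```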

Right-click the unallocated space, and select “New Simple Volume”:

Click to enlarge

After clicking “New Simple Volume” you will be guided through a series of steps. For the vast majority of users, just accepting the defaults and clicking Next will be fine. The two items you may wish to change are Assign the following drive letter if you’d like your drive to have a specific letter assigned, and Volume label, which will be the name you will see associated with the drive letter in Windows File Explorer:






After clicking Finish in Disk Management you will see the drive partition being formatted:


Once the format is complete the partition will have a drive letter and be accessible in Windows Explorer:

Click to enlarge

Note: If you had manually selected to initialize the drive as MBR and not GPT, or if you are using Windows XP and the drive is larger than 2TB, the drive will be split into two sections and only the first section of 2TB will be usable. Our 4TB drive, when initialized as MBR, is reported as two sections of 2048.00 GB and 1678.02 GB. A volume cannot be created for the second section; the option is grayed out:

Click to enlarge

Mac OS X

For Mac OS X we’ll be focusing on the Disk Utility. The Disk Utility is much like the Disk Management Console in Windows with many similar elements. The biggest difference is the type of partitions available and selecting the best partition scheme.

When a new blank hard drive or SSD is attached to a Mac system, you should see a dialog box automatically pop-up asking what you would like to do. If it does not, the Disk Utility can be found within the Utilities folder (found inside the Applications folder). If you’re sure that erasing any data on the drive is OK, go ahead and click “Initialize…” to open the Disk Utility:


Once Disk Utility is open you will see a list of drives attached to the system to the left of the window. It should be fairly easy to identify the drive you want to initialize as the drive size and model number will usually be present. For this example we’re using the “4 TB HGST HDS 724040…” hard drive:

Click to enlarge

After selecting the drive you wish to initialize you will be presented with several options. Click on the “Partition” tab:

Click to enlarge

Now click on “Options” to select the partition scheme for the drive:


Here we have options for GUID (GPT) and MBR, but we’re also presented with Apple Partition Map. MBR is the older legacy method of initializing drives, and is only necessary if you need to access the drive on a Windows XP system (XP is incompatible with GPT and Apple Partition Map). Apple Partition Map is also an older legacy method of initializing drives, and is only necessary if you need to use the drive as a startup disk on a PowerPC-based Mac. Because our example hard drive is larger than 2TB, Apple does not give us the option to select MBR, only GUID (GPT) and Apple Partition Map. We recommend GUID for most users.

After clicking “OK” we now need to partition the drive. You can create multiple partitions if you want, but this guide assumes that you, like most people, want to access the entire drive through a single partition. Click on the “Partition Layout” drop-down menu and select “1 Partition”. You may give the partition a name that you will see associated with the drive in Finder; we chose to leave ours as the default “Untitled 1”:

Click to enlarge

Now we need to select which format (filesystem) to use. If you are solely a Mac user, the best option is “Mac OS Extended (Journaled)”. If you need to use the drive with older Windows XP-based computers, you will want to select “MS-DOS (FAT)”, but please note that the maximum file size this format supports is 4GB, which is problematic for larger files like HD video. If you want to share the drive between Macs and computers running Windows Vista or newer, the best filesystem is ExFAT. For this example we’re going to select ExFAT, since our office uses a mixture of Mac and newer Windows systems:

Click to enlarge
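The 4GB ceiling on MS-DOS (FAT) exists because the filesystem records each file’s size in a 32-bit field; a quick check of the math:

```python
max_fat_file = 2**32 - 1        # FAT32 stores a file's size as a 32-bit value
print(max_fat_file)             # 4294967295 bytes, i.e. just under 4 GiB
print(max_fat_file / 2**30)     # ~4.0 -- too small for a typical HD movie file
```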

After selecting the format and clicking “Apply” you will be presented with a confirmation dialog:


After clicking “Partition” Disk Utility will format the drive:

Click to enlarge

After the formatting process is complete, Disk Utility will show the hard drive size and model, there will be an entry for your formatted drive partition, and the drive should automatically mount and be visible in the Finder (we have the Finder set to show mounted drives on our desktop; this is not enabled by default on newer versions of OS X):

Click to enlarge

If you have any questions at all, please comment below or email us. We’re happy to help!

Quick Instructions


Windows

  1. Logged in as an administrator, open the Windows Disk Management Console by pressing Windows + R to open the Run dialog box. Type diskmgmt.msc and press Enter.
  2. The Disk Management Console should automatically detect a new non-initialized drive and ask to initialize it.
  3. Select either Master Boot Record (MBR) or GUID Partition Table (GPT). Click OK.
  4. Right-click on the unallocated space, and select “New Simple Volume”.
  5. After clicking “New Simple Volume” complete the “New Simple Volume Wizard” to format and assign a drive letter.


Mac OS X

  1. Open the Disk Utility located within the Utilities folder inside of Applications.
  2. Select the drive to initialize on the left.
  3. Click on the “Partition” tab.
  4. Click Options and select the partition scheme: GUID (GPT), Apple Partition Map, or Master Boot Record (MBR). Click OK.
  5. Choose the partition layout, enter the desired size of the partition(s), and rename the partition(s) if desired.
  6. Choose the drive format: Mac OS Extended (Journaled), Mac OS Extended (Case-sensitive, Journaled), MS-DOS (FAT), or ExFAT, and click “Apply”.
Recovering Existing Hard Drive or SSD Data in Your Plugable Docking Station Thu, 12 Mar 2015 22:48:31 +0000 It’s happened to almost all of us at one point: your computer or external hard drive fails and panic sets in. Perhaps your files haven’t been backed up yet, or this drive is the only backup. One way or another, you made it to us and bought one of our docking stations. Now what do you do?


Because one of the most common reasons for buying a Plugable hard drive docking station is to recover data off of a SATA hard drive from another computer or external hard drive enclosure, we wanted to talk about some issues our customers frequently experience. The following steps apply to our USB3-SATA-UASP1, USB3-SATA-U3, and our entire Plugable Storage System lineup. They also apply to hard disks that are installed inside your computer and potentially other docking stations/enclosures/adapters.

The most important thing to keep in mind is that data recovery is often best left to trained technicians and anything you do to recover data on your own could make recovering the data impossible, even for a data recovery specialist.

If you are trying to set up a new blank hard drive and are encountering issues, please see this post here.

Internal Hard Drives

Our lay-flat and vertical docking stations are quite useful for recovering data from a desktop or laptop computer because they support both 2.5″ and 3.5″ SATA hard disk drives (HDD) and solid state drives (SSD). If you’re able to remove the drive from the computer to insert into our dock, you’re on your way to accessing the data. With that being said there are always scenarios where this may not be true. There are many factors that can cause data to be inaccessible. Assuming for the moment that the hard drive in question hasn’t failed completely and is not part of a RAID array, chances are our dock should be able to help access data off the drive.

Here are some common trouble scenarios for recovering data from an internal drive in our dock:

  • Complete drive failure. This is fairly self-explanatory: the drive itself has mechanically or electronically failed, causing the drive to not be detected by our dock.
  • Pending drive failure. HDDs and SSDs often fail slowly, most commonly developing what are known as bad sectors. This can lead to data corruption, making data recovery extremely difficult or impossible. Other, less likely factors can also be present, such as intermittent electronics on the circuit board, failing drive bearings, etc.
  • Partition / filesystem damage from improper shutdowns, viruses, etc.
  • Incompatible filesystem(s) with the host data recovery computer. For example, Windows systems cannot natively access data from Mac or Linux/Unix formatted drives; we’ll touch more on this later.
  • Drive is part of a RAID array like RAID0, RAID10, RAID5, or RAID6. A drive from a RAID1 array is the only kind of RAID drive our docking station can potentially recover data from.
  • Whole disk software based encryption such as Microsoft BitLocker / EFS, TrueCrypt, and others.
  • Specialized backup and partition software such as Norton GoBack and some versions of Acronis can cause issues and should be removed/disabled if possible prior to data recovery.

External Hard Drives

Hard drives extracted from external enclosures or drives used in other docking stations will have many of the same potential issues that we just talked about for internal drives but do introduce other new scenarios. A typical scenario is the power adapter or USB port on an external drive has failed. The hard drive inside the failed enclosure is removed and the ‘bare’ drive is inserted into our hard drive docking station to attempt recovery. Or sometimes a drive that was used in another dock is inserted into ours or vice versa.

Here are some common scenarios for recovering data from an external drive in our dock:

  • All of the above scenarios from our Internal Hard Drives list apply.
  • Whole disk hardware-level encryption. This can be intentional, in a drive sold to protect against data theft, or unintentional, where what consumers believe are standard hard drives from companies such as Western Digital (the most common in our experience) are written using a form of proprietary hardware encryption that prevents the drive from being read in any enclosure except the one it shipped with.
  • Sector emulation. See our Understanding Large SATA Drive Compatibility blog post for more details. “Some docks have a non-standard sector emulation feature that enables using capacities above 2TB on Windows XP 32 bit. But this requires that drives be initialized and formatted in a special way, and NOT be used with other SATA controllers in desktop PCs or other drive docking stations, unless those units also have a matching firmware version and support for this feature. Plugable USB SATA docks do not support sector emulation for XP. Rather, we’ve chosen to support 3TB+ Advanced Format drives in the standard way without any emulation.”

Determining if your Drive is Healthy or Failing

One of the first steps is finding out whether the drive you are trying to recover data from is in good health. Often a drive appears to be working fine until you try to copy large amounts of data. Common signs of a failing drive include: a file cannot be read during a transfer and the transfer fails, often with a cryptic error like “Cannot copy my.file: Data error (cyclic redundancy check)”; files transfer but arrive corrupted; transfer speed is much slower than expected; and/or the drive drops offline during transfers, requiring the dock to be reset.

Usually the first course of action would be to check the S.M.A.R.T. status of the drive. This can indicate signs of failure in a drive like bad sectors or read/write errors. There are several free (or free trial) utilities available for Windows and Mac that can be found online. Here’s what we recommend:

If the drive appears healthy after checking with a SMART utility but is obviously showing signs of irregular behavior, we recommend downloading and installing the advanced diagnostic utility from your hard drive manufacturer. Unfortunately for Mac users this isn’t an option. Here are some common drive manufacturer diagnostic links for Windows:
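Whichever utility you use, the attributes worth watching are the ones named above (bad sectors, read/write errors). As a hypothetical illustration, here is how raw smartctl-style attribute lines could be screened mechanically; the sample text, column layout, and attribute selection are assumptions, and real `smartctl -A` output varies by drive:

```python
# Hypothetical example: screening smartctl-style S.M.A.R.T. attribute lines
# for the failure indicators discussed above. Sample data, not real output.
SAMPLE = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
"""

WORRYING = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "UDMA_CRC_Error_Count"}

def flag_attributes(report):
    # Return the worrying attributes whose raw value (last column) is non-zero.
    flagged = {}
    for line in report.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in WORRYING:
            raw = int(fields[9])
            if raw > 0:
                flagged[fields[1]] = raw
    return flagged

print(flag_attributes(SAMPLE))
# {'Reallocated_Sector_Ct': 12, 'Current_Pending_Sector': 3}
```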

Determining the Filesystem of the Drive

A common scenario we run into is a customer will take a hard drive out of another computer or device like a network attached storage (NAS) device and try to recover the data with our dock only to find that the host computer can see the drive but can’t actually read the data on it. For a Windows user this would be apparent when looking in the Device Manager and seeing the drive listed, but the drive not being mounted and accessible from Windows Explorer. A Mac user would similarly check in Disk Utility for the drive if it is not accessible from the Finder.

The first step is to identify where the drive came from prior to being used in our docking station. Was this drive from another Windows computer? Was it from a Mac, or perhaps a Linux computer? How about a NAS device or external hard drive? By knowing this information we can look for information about what type of filesystem is on the drive.

Next you’ll need to find out if your computer can support the filesystem of the drive in question. Here’s a basic list of what filesystems are supported by OS:

  • Windows XP (with the proper update installed) and higher can read and write to FAT(16), FAT32, ExFAT and NTFS.
  • Mac OS X 10.6.5 and higher can read and write to FAT(16), FAT32, ExFAT, and HFS+ (Mac OS Extended Journaled or Case-sensitive, Journaled). Mac OS X 10.3 and later can read but not write to NTFS (write support can be enabled, but it is not recommended as it may be unstable).
  • Linux (Ubuntu for example) can read and write to FAT(16), FAT32, ExFAT (with the proper package installed), NTFS, EXT2, EXT3, EXT4, JFS, and XFS. There are other filesystems, but they are far less common and not available for every Linux distro by default: BtrFS, ReiserFS, UFS (Unix), ZFS.
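As a rough decision aid, the list above can be restated as a small lookup table. This is just a sketch summarizing the bullets (not an exhaustive support matrix; it ignores optional packages and treats read-only support, like NTFS on a Mac, as unsupported):

```python
# Native read/write filesystem support, condensed from the list above.
NATIVE_SUPPORT = {
    "windows": {"FAT16", "FAT32", "ExFAT", "NTFS"},
    "mac":     {"FAT16", "FAT32", "ExFAT", "HFS+"},   # NTFS is read-only by default
    "linux":   {"FAT16", "FAT32", "ExFAT", "NTFS",
                "EXT2", "EXT3", "EXT4", "JFS", "XFS"},
}

def can_read_write(host_os, filesystem):
    # Rough guide: can this host OS natively read AND write the filesystem?
    return filesystem in NATIVE_SUPPORT.get(host_os, set())

print(can_read_write("windows", "EXT4"))  # False -- third-party software needed
print(can_read_write("linux", "EXT4"))    # True  -- typical NAS drives mount here
```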

Knowing what filesystems are supported will help you decide how to proceed. If you’re a Windows user and find the hard drive you need to recover data off of is from a Mac, either you need to install some 3rd party software to read it, or simply recover the data on a Mac system. If you’re a Mac user, you should be able to read data off of a Windows computer drive without issue.

The hardest part is recovering data from a Linux formatted drive on a non-Linux computer. Whether you’re a Mac or Windows user, chances are if you’ve got any kind of NAS device in the home, it will be using a filesystem your computer cannot natively read. In our experience most consumer grade NAS units use EXT2/3/4 filesystems. For Windows users we recommend installing some 3rd party software. For Mac users, take a look at this blog post done by CNET.

If you have any questions at all, please comment below or email us. We’re happy to help!

Windows 8.1 and the ASMedia USB 3.1 XHCI 1.1 Host Controller Thu, 05 Mar 2015 17:00:24 +0000 Update 3/22/2015:
A new stable version of the ASMedia USB 3.1 driver is now available. ASUS is not yet linking to this version from the motherboard support page so here is a direct download link: Version from 2/12/2015.



When we heard ASUS had released the first commercially available USB 3.1 equipped motherboard we rushed to get our hands on one. We picked up the ASUS Z97-PRO(Wi-Fi ac)/USB 3.1 and assembled our new USB testing workstation. Since Intel is not yet ready with their USB 3.1 controller, ASUS added an ASMedia ASM1142 USB 3.1 XHCI 1.1 controller with two teal colored USB 3.1 type A connectors.

Click to enlarge

As most of our products are USB based, we wanted to get some early testing results to be prepared for any issues that might arise with this new USB 3.1 controller. We installed a fresh copy of Windows 8.1 with all of the latest updates and found that the built-in Microsoft XHCI “0110” driver (version 6.3.9600.17393 from 10/6/2014) for the ASMedia controller only appeared to be USB 3.0 capable according to the device description.


Normally we do not recommend replacing the built-in Windows 8/8.1 Microsoft USB driver stack with 3rd party drivers, but in this scenario, to achieve full USB 3.1 functionality, we tracked down the latest 3rd party driver installation utility from ASMedia (version from 12/24/2014), provided by ASUS in their motherboard support downloads section, and installed it. The controller was then recognized as an XHCI 1.1 controller capable of USB 3.1!


Unfortunately we experienced major issues almost immediately. Upon connecting our USB3-HUB7-81X (VIA VL812 B2 based) 7-port hub, the system instantaneously crashed with a “SYSTEM_SERVICE_EXCEPTION (asmtxhci.sys)” blue screen of death. This was not a good sign as the driver in question that caused the crash was the ASMedia driver (asmtxhci.sys) we had just installed.


After several reboots and experimentation, we found the crashes to vary widely in frequency. Sometimes the crash would occur with a simple USB 3.0 flash drive, other times with our 7-port hub and all 7 ports occupied with USB graphics adapters. When connecting these same devices to the on-board Intel 9 Series USB 3.0 controller there were no issues. We decided to remove the ASMedia drivers and roll back to the built-in Microsoft “0110” drivers to see what would happen. We found that the controller became as stable as the Intel controller but with the tradeoff of losing USB 3.1 functionality for future USB 3.1 devices. This was not a compromise we were willing to make.

We did some digging and found that the “asmtxhci.inf” driver file from the ASMedia driver installer not only worked for the new ASM1142 USB 3.1 controller, but it appeared to be a unified driver covering all ASMedia USB host controllers. This gave us an idea. We installed an older ASMedia ASM1042 USB 3.0 PCI-E controller card in our USB test workstation and installed the ASMedia drivers once again. Both 3.1 and 3.0 ASMedia controllers were now running the same driver on the same workstation.

Click to enlarge

We found that the instability we encountered on the on-board ASM1142 3.1 controller also happened on our add-on PCI-E ASM1042 3.0 controller. Knowing that the instability was definitely driver and not hardware related, we looked at the “asmtxhci.inf” driver file for the older (stable) ASMedia drivers that we have been recommending to our customers for their ASMedia USB 3.0 controllers (version from 4/10/2014, WHQL certified) and found that it was also compatible with our new 3.1 controller despite being the oldest of the three compatible drivers.

Once again we removed the drivers, reverted to the Microsoft built-in drivers for both ASMedia controllers (“0110” for the ASM1142 and “0096” for the ASM1042), and finally installed the older ASMedia drivers. After doing so both controllers were stable in our tests and the ASM1142 USB 3.1 controller was being recognized as 3.1 (XHCI 1.1) capable.

Click to enlarge

At the moment Plugable is still testing the ASMedia ASM1142 controller, but the early results are looking great after finding a stable driver. We’ve currently got 6 of our USB3-HUB7-81X 7-port USB 3.0 hubs and 36 of our UGA-3000 USB 3.0 DisplayLink graphics adapters attached with no signs of instability or USB resource limits. (Please note that Windows and DisplayLink do have limits on how many monitors can be attached. Our demonstration here is unlikely to initialize all 36 adapters with monitors attached successfully. Realistically we have been able to get up to 14 working in the past.)

Click to enlarge

For the time being, we strongly recommend users do not install the latest driver version from ASMedia but rather install the older stable drivers (or use the built-in Microsoft “0110” drivers if 3.1 operation is not required). Unfortunately as we’re still waiting for some native USB 3.1 devices to test, we cannot comment on actual USB 3.1 device functionality on any of the aforementioned driver versions.

Due to the difficulty in finding and downloading the stable ASMedia driver version, we have provided it below for your convenience. Feel free to comment below if problems remain, but note that Plugable cannot take responsibility for any issues these drivers may cause on your computer.

ASMedia ASM1042 USB 3.0 XHCI / ASM1142 USB 3.1 XHCI 1.1 Driver Version WHQL – 4/10/2014

Plugable Car Cup USB charger Giveaway (Uber drivers only) Fri, 27 Feb 2015 01:52:33 +0000 #giveaway for #uber drivers. RT + follow for chance to win one of 10 Plugable car USB chargers


  • No purchase necessary
  • This giveaway is limited to drivers in the 50 US states (sorry! We’ll get to drivers in other geographies in future giveaways!)
  • Winners selected from followers who retweet and who have some history of Uber driver activity in their Twitter feed
  • Winners selected from retweets made by Saturday, Feb 28th
  • Winners will be contacted via Twitter DM (Twitter only allows this communication for users who follow @plugable)

For more information on how the Plugable Car Cup USB charger can delight your riders by keeping them fully charged, visit

More Windows Tablets Can Transform into Full Desktops in 2015 Thu, 12 Feb 2015 01:06:16 +0000
This year, even more Windows tablets can be transformed into full desktop workstations using the Plugable UD-Pro8 docking station, introduced for the Dell Venue 8 Pro tablet in a successful Kickstarter campaign last summer. To help you decide whether turning one of these tablets into an ultra-portable desktop solution will work for you, here is a list of compatible devices and some videos about the UD-Pro8.


Currently, the following tablets have been confirmed by us or our customer base to be compatible with the UD-PRO8:

  • ASUS VivoTab Smart 10.1 ME400C
  • Dell Venue 8 Pro 3000 (Z3735G CPU)
  • Dell Venue 8 Pro 5000 (5830, Z3740D CPU)
  • Dell Venue 8 Pro 5000 (5830, Z3745D CPU)
  • HP Stream 7 5701
  • HP Stream 8 5901
  • Lenovo IdeaTab Miix 2 8″
  • Nextbook 8 (Win 8.1)
  • Toshiba Encore Mini

For full details see our compatibility chart here. If your tablet is not on the chart or is marked “untested,” and you would like to test it with the Pro8, you can apply for a review unit at For FAQs about the Pro8 click here.


You can see Pro8 customer reviews on Amazon here. Below we’ve embedded several YouTube reviews and unboxing videos of our Pro8:

If you have any questions at all, please comment below or email us. We’re happy to help!

Where to Buy

Charging on the Go Gets Easier: The New Plugable Power 2015 USB-C2W 2-Port USB Smart Charger Tue, 10 Feb 2015 19:07:58 +0000 Earlier this year Plugable introduced our popular USB-C5TX 5-Port 40W USB smart charger. However, 5 ports and a cable to the wall can be overkill when traveling. For your bag or purse, smaller and lighter is better. That’s where our new Plugable USB-C2W 2-Port travel charger shines. With its small size, two USB smart charge ports, and flip-up wall plug, it’s great for charging at the coffee shop or hotel room. And with the C2W’s smart IC built into each port, devices like iPhone 6 and 6 Plus charge up to twice as fast with the C2W than with their bundled power adapters.

Half the size of the C5TX, the new USB-C2W has a flip-out AC wall plug built in (for US, Canada, Japan), which means one less pesky power cable to haul around. It’s so small you can stick it in your pocket, making it the perfect charging solution when you’re on the move.

Tired of juggling a multitude of USB chargers for all the electronic devices you carry? The C2W USB smart charger can be the perfect replacement for the whole lot. It charges most cell phones, tablets, USB battery packs, handheld game consoles, e-readers, e-cigarettes, cameras, smart watches, fitness trackers, bicycle lights, and many more. You only need one outlet.

How does smart charging work? USB charging isn’t perfectly standardized. Different devices have different special handshakes to detect “their charger”. With the C2W, when you plug in your device, a chip built into the charger identifies the connecting device when possible, then selects the best charging mechanism for the fastest possible charge. Some devices, like the iPhone 6 and 6 Plus, will charge faster with the C2W than with their bundled power adapters. Devices choose how much and how fast to charge, but each USB port of the USB-C2W can supply up to 2.4A of charging current, sharing an impressive 4A, 20W total between both ports. That is enough power to charge an iPad and an iPhone simultaneously.

If you have any questions at all, please comment below or email us. We’re happy to help!

Where to Buy

Plugable-BTAPS Python Library for Creating Custom Applications with the Plugable PS-BTAPS1 Thu, 05 Feb 2015 22:05:51 +0000 Plugable BTAPS Command Line Application

We are excited to announce the release of our open-source library for interacting with our Plugable PS-BTAPS1 Bluetooth Power Switch. This library is fully compatible with Windows and Linux systems running Python 2.7 and the pyBluez library. We hope that this library will help the open-source and maker community create interesting new projects and applications with our Programmable Bluetooth Power Switch.

All of the code and documentation for the library is hosted on our Plugable BTAPS Github Repository. Some examples of how to use the library can be found in our Github wiki. The library is MIT Licensed, so feel free to use it directly, or as a reference for implementing BTAPS functionality in any of your projects.

This library exposes most of the features present in our Android and iOS apps including:

  • Setting Switch On/Off
  • Reading current status of switch (name, on/off state, timer settings)
  • Creating, modifying, and deleting timers
  • Changing the device’s name
  • Updating the device’s date and time to your PC’s current date and time

Bundled with the library is also a simple command-line interface for interacting with a single Plugable PS-BTAPS1. It’s an interactive program, and requires only the Bluetooth device address of a PS-BTAPS1.

# Replace 00:00:00:00:00:00 with your device's Bluetooth Device address
btaps 00:00:00:00:00:00

The library and CLI application can be easily installed using pip.

pip install plugable-btaps

Some small features are still missing, but they will be implemented as time allows. This is the first release of the library, and there are certainly areas to improve on. We welcome code contributions and bug reports in our Github repository.

Plugable Launches DisplayPort and Mini DisplayPort to HDMI Active Adapters Thu, 05 Feb 2015 15:59:00 +0000 As a market leader in multi-monitor docking stations and graphics adapters, our offices at Plugable are by necessity full of all sorts of displays, PCs, and tablets. Over the past year, we’ve seen a substantial increase in the number of systems and displays which include the VESA DisplayPort (DP) and Mini DisplayPort (mDP)/Thunderbolt outputs. As much as we’re excited about the growing popularity of DisplayPort, connecting our DisplayPort computers to our high-resolution HDMI monitors using many commercially available adapters was often a frustrating experience. Existing adapters on the market vary wildly in features and quality – most wouldn’t allow us to use resolutions above 1920×1080, and some would even lose sync and result in a blank display.

We set out to fix that.

And with that, Plugable is proud to announce the launch of our DisplayPort and Mini DisplayPort to HDMI active adapters, both of which have passed the rigorous testing necessary for VESA (DisplayPort) certification. Plugable’s DP and mDP active adapters allow the versatility to connect your DisplayPort-enabled PC or tablet to virtually any HDMI-equipped display, including monitors and HDTVs with Ultra-HD 4K resolutions. Our adapters support an internal clock rate of up to 300MHz, which allows for all the “must have” features of HDMI 1.4: resolutions up to 4K@30Hz, 1080p@120Hz, stereoscopic 3D support, and Deep Color depths. The adapters are also capable of transmitting 8-channel LPCM/High Bit Rate (HBR) audio, up to 192kHz sample rate. Additionally, both adapters support AMD Eyefinity technology, allowing you to connect a 3rd or 4th display on supported AMD graphics cards.

Devised as a royalty-free alternative to HDMI, DisplayPort is no longer just a niche connector found on high-end video cards. Microsoft’s Surface Pro systems, Intel’s NUC series, Apple Macs, and many others are now coming standard with this type of connector included. DisplayPort monitors are becoming more widely available as well, though most entry level monitors still exclude DisplayPort connections as a cost-saving measure.

The Plugable DP-HDMI adapter is a great choice for those using our UGA-4KDP USB 3.0 DisplayPort Graphics Adapter with a high resolution QHD or UHD HDMI display. The mDP-HDMI adapter is a good fit for those with newer laptops, ultrabooks, and tablets which offer the smaller Mini DisplayPort or Thunderbolt connection.

For additional technical and compatibility information including the exciting details regarding “active” vs. “passive” adapters, check out our DP-HDMI and mDP-HDMI product page.

Where to Buy

Nexus 7 First Gen (2012) Upgrade to Android 5.0 (Lollipop) Breaks USB-Ethernet Support Tue, 03 Feb 2015 20:28:23 +0000 Update: The latest Android version update for the Nexus 7 first gen (2012), version 5.1.1, has resolved this issue. If this version is not yet available over-the-air (OTA) for your tablet, you can download and install it manually from Google’s factory image website here. Use the “nakasi” image for regular WiFi devices, and “nakasig” for devices that can use cellular networks.

The recent over-the-air upgrade to Android 5.0 (Lollipop) on the Nexus 7 first gen (2012) tablet appears to have broken support for USB-Ethernet devices including Plugable’s USB2-E100 and USB2-OTGE100 USB to Fast Ethernet adapters. This is ironic, because the same upgrade on the Nexus 7 second gen (2013) has finally fixed support for these devices.

At Plugable, we are looking for a possible work-around, but even if one is found, it will likely require root access to the device. In the meantime, if you are using either adapter with a Nexus 7 first gen (2012) device and have not yet upgraded to Lollipop, we recommend staying with Android version 4.4 (KitKat) until this issue is resolved. Further updates to this issue will be reflected in this blog post.

If you have questions or useful information about this, please comment here or email us.

Pi and Coffee: Automate Your Morning with Plugable Bluetooth Switches and a Raspberry Pi Thu, 29 Jan 2015 15:00:48 +0000 Pi and Coffee
It’s 6 am. Time to get up. Your favorite song starts playing. A dim light comes on next to the bed. A few minutes later a bright light jars you from sleep. Meanwhile the smell of fresh-brewed coffee wafts in from the kitchen. Welcome to another automated morning with your Raspberry Pi and some Plugable PS-BTAPS1 Bluetooth Switches.

All you need to send your morning into the future are one Raspberry Pi, one Plugable USB-BT4LE Bluetooth adapter, and a Plugable PS-BTAPS1 Bluetooth switch for each lamp, coffee pot, or other morning essential you want to control.

There are four basic steps to get this going: 1. Get the Pi Ready, 2. Set Up Bluetooth, 3. Communicate with Your Switches, and 4. Set Up a Schedule. I assume you already have your Pi up and running. The instructions in this guide assume you are using the Raspbian distribution, available here. Raspbian is also installed by default if you use the NOOBS installer.

Get the Pi ready
Connect your Pi to the internet, then make sure everything is up-to-date by running the following commands:

sudo apt-get update
sudo apt-get upgrade

Answer “Yes” to any prompts.

Next, install Bluetooth support by running the following at the command prompt:

sudo apt-get install blueman

The script we will use to control the switches is written in Python and uses the PyBluez module. Both can be installed with this command:

sudo apt-get install python-bluez

Plug your Bluetooth adapter into the Pi and you are ready to go!

Set Up Bluetooth

Now, let’s make sure Bluetooth on the Pi is working and can communicate with the BTAPS1.

Plug the BTAPS1 into a convenient electrical outlet. Make sure the Bluetooth adapter is plugged into a USB port on your Pi, then issue the following command:

hcitool scan

You should see a list of Bluetooth devices in the area. One of the lines should end with “Plugable.” That is your BTAPS1.


If you don’t see your switch in the list, make sure everything was set up correctly. If it still isn’t responding, contact us for help.

Write down the number in the first column. It’s the unique address for this switch, called a bdaddr. You’ll need it later. Mark the BTAPS1 you used so that you can distinguish it from the others you have (I write the bdaddr on a piece of masking tape attached to the switch). Plug in your other BTAPS1 switches one-by-one, run the command, write down each bdaddr, and mark each switch.

Communicate with Your Switches

Now that we know the Pi can see each switch through Bluetooth, let’s set up the script that controls them.

Ivan Fossa Ferrari, one of Plugable’s resident software geniuses, has written a script which allows you to switch the BTAPS1 off and on by sending a short command from your Pi. You can see the script and learn how it works here.

We will download that script and save it as a file that can be executed by Python on your Pi.

Open the Epiphany browser on your Pi and navigate to this same blog post. Right-click the following link:

Download Script

Select “Save Link As…” and in the window that appears, change the name of the file to

Click Save. The file should be saved in your home directory.

To test the script, plug one of your BTAPS1 switches into an outlet, then plug a lamp into it. Switch the lamp to the ON position. It shouldn’t turn on, because the BTAPS1 should still be in the OFF state.

In a terminal window, type this:

python ~/ 00:00:00:00:00:00 on

Replace 00:00:00:00:00:00 with the bdaddr for the switch you are using. The light should turn on!

If it doesn’t work, make sure the lamp is switched on, and that the address was typed correctly. If that isn’t the problem, make sure the BTAPS1 is still accessible to your Bluetooth adapter by running the hcitool scan command again.

Now issue this command:

python ~/ 00:00:00:00:00:00 off

Again, replace 00:00:00:00:00:00 with the bdaddr for the switch you are using. The light should turn off.

Did it work? Great! Try the same thing with the other switches you have and make sure they respond to the command also. Remember to change the bdaddr in the command each time you test a different switch. If you forget the bdaddr for a switch, you can always find it out with the hcitool scan command.

Set Up a Schedule

Now it’s time to get creative. How do you want to wake up? Loud blast of music? Gentle lights? Toast and coffee? Let’s set it up. We’ll use the cron function in the Pi that allows us to schedule events in many different ways.

First, let’s make sure your Pi wakes you up in the morning in your time zone and not somewhere else’s. At the command line, type:

date

Your Pi will display its concept of the time and date. Is it correct? Check the three letters to the right of the time. Do they look like your time zone? If they say UTC, your Pi still thinks it is in the UK, and has to be educated. Type this command:

sudo raspi-config

Select item 4 “Internationalization Options.” Go to item I2 “Change Timezone.” Wait a moment for the next screen to appear. Select your geographic area, then on the next screen, select your city or location. Press the tab key until you are on “OK”, then press Enter.

Type the date command again. Make sure it shows your correct time zone. If the time or date is wrong, correct it by typing a command in this format, using the current time and date in your time zone:

sudo date -s HH:MM:SS

Check it again.

Now let’s set up a cron job to turn on that light. Type the following at the command prompt. You don’t want to use sudo because this job will be personal to your login:

crontab -e

The default nano editor will open the crontab file, which is used to schedule events to happen at specified times. You can use it to issue commands that turn your switches on and off whenever you want, and have them repeat in many different ways. If you have never used crontab before on this Pi, it will open a file that explains the format for scheduling new cron jobs. I’ll explain it here too.

A single cron job issues a single command on a schedule you set. For example, you can turn on a light every day at 7 am or turn your coffee pot off at 8 am. It consists of a single line added to your crontab file. You add that line to the crontab file with the crontab -e command. Never open the file and edit it directly.

The line has five numbers that determine when it takes effect. When the appointed time comes, the cron program issues the command that follows the numbers in the line. It looks like this:

M H DOM MON DOW COMMAND
M refers to the minute you want the command issued. For example if you want the light to come on at 7:15 am, you would put 15 here.
H refers to the hour, using the 24-hour clock. If you wanted the popcorn popper to make you popcorn at 4:00 pm, you would put 16 here.
DOM refers to the day of the month. If you wanted your light to come on only on the third of the month, you would put 3 here.
MON refers to the month. If you wanted your command to be issued only in November, you would put 11 here.
DOW refers to the day of the week, with Sunday being either 0 or 7 and all the other days numbered in order. If you wanted your coffee only on Monday, you would put 1 here.
COMMAND refers to any command that can be executed by your Pi.

For any of the numbers, putting * will make it happen every minute, every hour, every day of the month, every month, or every day of the week, respectively.

For example, if you add this line, it will turn your switch on at 5:30 am every Wednesday:

30 05 * * 3 python ~/ 00:00:00:00:00:00 on

You can turn it off at 7:00 am every Wednesday like this:

00 07 * * 3 python ~/ 00:00:00:00:00:00 off

You can have it come on when you arrive home from work at 8:00 pm (you work hard!):

00 20 * * 3 python ~/ 00:00:00:00:00:00 on

This command would operate every day at 9:20 pm:

20 21 * * * python ~/ 00:00:00:00:00:00 on

It’s a good idea to practice this by making sample lines that activate a few minutes in the future. For example, if the current time were 7:30 pm, a good test would be:

32 * * * * python ~/ 00:00:00:00:00:00 on
34 * * * * python ~/ 00:00:00:00:00:00 off

Save the file with control-o and control-x, and wait to see if the action happens. The light should turn on at 7:32 pm and turn off at 7:34.

You can wake up to your favorite song at 7 am every day with the following cron job. Just connect your speakers to the Pi, and make sure your favorite song is in your home directory:

00 07 * * * omxplayer ~/yourfavoritesong.mp3

Once you feel you’ve mastered cron jobs, make a schedule for your morning. Create a line for each switching action for each BTAPS1 switch, then save the file as before with control-o and control-x. You can comment using a hash mark (#).

On every Tuesday and Thursday, the cron jobs below would fire up the coffee maker at 6:50 am, turn on your bedside lamp and play your song at 7 am, then turn on the bright lights at 7:10 am. They would then turn off everything when you leave for work at 8 am.

50 06 * * 2 python ~/ 00:00:00:00:00:00 on #Coffee maker ON 6:50 am TUE
00 07 * * 2 python ~/ 00:00:00:00:00:00 on #Bedside lamp ON 7 am TUE
00 07 * * 2 omxplayer ~/yourfavoritesong.mp3 #Song plays 7 am TUE
10 07 * * 2 python ~/ 00:00:00:00:00:00 on #Very bright lamp ON 7:10 am TUE
00 08 * * 2 python ~/ 00:00:00:00:00:00 off #Coffee Pot OFF 8 am TUE
00 08 * * 2 python ~/ 00:00:00:00:00:00 off #Bedside lamp OFF 8 am TUE
00 08 * * 2 python ~/ 00:00:00:00:00:00 off #Very bright lamp OFF 8 am TUE
50 06 * * 4 python ~/ 00:00:00:00:00:00 on #Coffee Pot ON 6:50 am THU
00 07 * * 4 python ~/ 00:00:00:00:00:00 on #Bedside lamp ON 7 am THU
00 07 * * 4 omxplayer ~/yourfavoritesong.mp3 #Song plays 7 am THU
10 07 * * 4 python ~/ 00:00:00:00:00:00 on #Very bright lamp ON 7:10 am THU
00 08 * * 4 python ~/ 00:00:00:00:00:00 off #Coffee Pot OFF 8 am THU
00 08 * * 4 python ~/ 00:00:00:00:00:00 off #Bedside lamp OFF 8 am THU
00 08 * * 4 python ~/ 00:00:00:00:00:00 off #Very bright lamp OFF 8 am THU

So fill up that coffee maker, plug in the BTAPS1, and hit the sack. Maybe the future hasn’t brought you a robot maid or personal helicopter yet, but your automated morning wake-up is already here with Pi and coffee!

If you have any questions at all, please comment below or email us. We’re happy to help!

The Mystery of the Windows Static IP That Won’t Stick Tue, 27 Jan 2015 15:00:50 +0000 One of the interesting things about helping customers at Plugable is that we not only see a wide variety of creative uses of our products, but sometimes we also get to find the root of operating system problems.

Now I know that may sound strange. I have always been ‘that guy’ who wants to get at the true root of a problem if possible. I have spent more hours than I care to admit tracking down seemingly minor glitches in the hopes of never having to be bothered by them again.

That opportunity presented itself again recently and I thought I would share the results as they may be useful to everyone.

Some of the customers who had purchased one of our USB docking stations mentioned they were having trouble setting a static IP address in Windows. They would make the change and although everything appeared to work properly at first, the change would not stick.

That shouldn’t happen with our docks. While a driver for the Ethernet adapter does get loaded, there is nothing special about the driver that would preclude setting a static IP.

I grabbed a random test laptop and was able to duplicate this behavior. I would make the change to the network adapter in the Network Connections area of the Network and Sharing Center in Windows 8.1 Pro. Although everything seemed to work fine, the change did not stick. If I went back into the settings for the Ethernet adapter, it would still be set to DHCP.

I removed the dock from the test computer and, using the laptop’s built-in Ethernet adapter, got the same results. Nothing relevant appeared in the Windows logs, and no error messages were displayed.

When searching for other reports of a problem like this, the challenge is that the search terms are very general and a lot of other results pop up. However, I did find two links that finally helped me zero in on the solution:

The Microsoft Knowledge Base article referred to Windows 2000! Some steps mentioned no longer applied to Windows 8.1, but the general description seemed to fit what I was seeing.

So I decided to be daring. I made a backup of the registry (by exporting it) before making any changes and then navigated to the registry key located at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Network. There I deleted the binary value called Config and restarted. This allowed me to set a static IP address and have the setting maintained as it should be.
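For reference, the backup-and-delete described above can be sketched from an elevated Command Prompt using the built-in reg tool (the backup file name here is arbitrary; back up first, as always when editing the registry):

```shell
:: Run from an elevated Command Prompt on the affected Windows machine.
:: 1) Back up the Network key before changing anything (backup file name is arbitrary):
reg export "HKLM\SYSTEM\CurrentControlSet\Control\Network" "%USERPROFILE%\network-key-backup.reg"

:: 2) Delete the cached Config binary value, then restart Windows:
reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Network" /v Config /f
```

After the restart, try setting the static IP again and confirm it sticks; if anything goes wrong, the exported .reg file can be double-clicked to restore the key.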

I tested this on another machine that had exhibited the behavior and it worked there as well. Feeling confident, I emailed the fix to one of our customers who had run into the issue (with the caveat to backup the registry) and it resolved it for him as well.

Not being content with just finding the fix, I Googled the registry key to see what results I would get. It’s something I like to do to see what comes up when I search for an answer I already know. That led me to only one other result from Microsoft here:

This blog post touts a similar fix to solve yet another range of maladies, but it doesn’t actually delve into the details of what this value records. Further searching led me to a book called Windows 2000 Server 24seven by Matthew Strebe (ISBN 978-0782126693). On Page 575 there is a reference to the Network key in general, saying it “Contains keys that create the bindings between network adapters, clients, services and transport protocols”.

There are probably more references out there that may explain what is being stored and, more importantly, why it can become corrupt and cause so many problems (I’ll keep digging in the hopes of finding the true root), but meanwhile I hope this relatively simple fix will help people experiencing similar problems setting static IPs.

New Plugable USB Fast Ethernet Adapter Brings Wired Ethernet Directly to Your Tablet’s MicroUSB Port Thu, 22 Jan 2015 15:48:58 +0000 You are in a hotel room in a far-away city and you need to access the internet through your tablet. But there’s no WIFI! Only a hard-wired Ethernet connection. What do you do?

In the past, with many popular tablets, you could reach for your trusty Plugable USB2-E100 Fast Ethernet adapter and get connected, but you needed a special OTG (On-The-Go) cable between its standard USB connector and the MicroUSB port on your tablet. It worked great, but the OTG cable was small and easy to lose. It often ended up as the weak link in your network access.

Now Plugable is proud to introduce the USB2-OTGE100, a new USB Fast Ethernet adapter that completely eliminates the need for an OTG cable. Instead of the standard USB connector of the E100, it features a male MicroUSB connector that plugs directly into the MicroUSB port on your portable device. You can ditch the OTG cable and use a wired LAN connection with ease.

Like the E100, the new USB2-OTGE100 features a compact design that fits easily in your pocket or bag. On one end there is an RJ45 connector that fits any standard Ethernet cable. On the other end is a wire that terminates in a male MicroUSB B connector that fits into the MicroUSB port found on most Windows and Android tablets and Android smartphones. It’s powered through the USB port, so no batteries or AC adapter are needed.

Although the OTGE100 is a lifesaver when all you have is a wired internet connection, using it to connect directly to a network has other benefits even when WIFI is available. Wired connections offer increased reliability and faster speed than WIFI (802.11)–especially when the signal is weak or far away, or when many people are using the network.

The OTGE100 uses the same ASIX AX88772 chipset as the USB2-E100. It is supported out of the box by nearly all tablets that feature Windows 8.1, including the Venue 8 Pro, Acer Iconia tablets, the HP Stream 7, Lenovo Miix 2, and the Nextbook 8. It is also supported by many tablets with Android versions 4.0 and later, including Nexus 7 first generation out of the box and second generation with Android 5.0.1 (Lollipop) or later. Many ASUS tablets are also supported. Unfortunately, Kindles and most Samsung tablets aren’t supported. The OTGE100 also works with many Android smartphones, including the Nexus 5 and the Moto X. Check the product page for the OTGE100 for a list of devices that we’ve tested with it.

Plugable’s new USB2-OTGE100 Fast Ethernet Adapter is a great way to get connected. Take it with you and leave that OTG cable at home!

If you have any questions at all, please comment below or email us. We’re happy to help!

Where to Buy

Plugable Launches 7-Port SuperSpeed USB 3.0 Hub with 60W of Power and Best Available Charge and Sync Support Thu, 15 Jan 2015 17:16:05 +0000

Last year we introduced the USB2-HUB7BC, our flagship 7-port USB 2.0, 60 watt powered hub with BC1.2 charging support. This hub was our “holy grail” of charging hubs due to the great compatibility that the BC1.2 charging standard offered. Nearly any iOS device with a Lightning connector and many other Android devices could charge and sync through that hub at up to 1.5A per port. The only drawback for some was that the hub wasn’t USB 3.0, and proprietary charging signals weren’t supported when the hub was being used as a stand-alone charger with no host computer connected. New for 2015, we now have a USB 3.0 version with all of the same great features as its USB 2.0 predecessor, and more.

We are excited to introduce our new Plugable USB 3.0 7-Port Hub and BC 1.2 Fast Charger. This is a USB 3.0 hub for connecting a PC, Mac, Linux, or other computer to add up to 7 additional USB devices. It also provides advanced compatibility for devices that support the BC 1.2 charging standard: charging and syncing (“Charging Downstream Port” or CDP mode), and charging with only the hub’s power adapter and no computer (“Dedicated Charging Port” or DCP mode). Unlike the USB 2.0 version, this new USB 3.0 hub also supports other proprietary charging signals when being used as a stand-alone charger. The USB3-HUB7BC uses a robust 60W UL-certified power adapter (12V 5A output, 12A at USB’s 5V) to power and charge the most demanding devices.

Note that not all phones or tablets support the new BC 1.2 standard, or all parts of it. Specific supported devices include all Apple iPads and iPhones with Lightning connectors, along with many newer Android devices including Amazon’s Kindle Fire line of tablets. Older Apple devices (30-pin) and some other phones and tablets do not support the BC 1.2 standard at all, so they won’t charge. Other devices may charge but not sync. All of these behaviors are determined by the phone or tablet, and so will vary by device.

If you have any questions, we’re here to help. Just ask below or email us anytime. Thanks for going out of your way for Plugable products!

Our Best USB Charger Just Got Better: The New Plugable Power 2015 USB-C5TX 5-Port USB Smart Charger Tue, 13 Jan 2015 17:37:53 +0000 Last year we released our USB-C5T 5-port smart charger, and it’s been a great success, but we set out to make it even better. New for 2015 is the Plugable USB-C5TX.

The C5TX, unlike its predecessor, has smart charging on all 5 of its USB ports and an integrated (and more powerful) power supply to charge the most demanding devices. It was important to us to eliminate the need to decide which port to connect a device to by making all 5 ports equally compatible. We also wanted to make the charger as compact and portable as possible, so we ditched the external power brick and installed a built-in power supply that uses the same standard detachable AC power cable (IEC 60320 C7, included) as many common consumer electronics devices. The power supply accepts 100-240V at 50/60Hz, so it can be used in any country around the world (with appropriate AC outlet adapter/cable).

With so many devices from daily life capable of charging from USB, why waste valuable AC outlets by using a separate AC adapter for each one? The C5TX is the perfect all-in-one universal USB smart multi-charger for charging most cell phones, tablets, USB battery packs, handheld game consoles, e-readers, e-cigarettes, cameras, smart watches, fitness trackers, bicycle lights, and more.

To maximize device compatibility, each USB port is equipped with its own individual smart-charging chipset that tries to identify the device connected and choose the best charging mechanism to provide the fastest charge possible. Some devices, like the iPhone 6 and 6 Plus, charge even faster with the C5TX than with their bundled power adapters. Each USB port can charge a device at up to 2.4A, with an impressive 8A, 40W total to share between all 5 ports. That is more than enough power to charge 3 iPads and 2 iPhones at the same time.

If you have any questions at all, please comment below or email us. We’re happy to help!

Where to Buy

Upgrade the USB Charging Capabilities of Your Car Mon, 12 Jan 2015 15:58:48 +0000 Your car may have enough cup holders but USB charging ports are often scarce. With our new Plugable USB-C3C, we’ve packed 3 USB smart charging ports into a beautiful cup-sized charger that sits in any cup holder and gets power from your existing 12V accessory socket / cigarette lighter.

With so many devices in your daily life capable of charging from USB, the need to recharge them on the go has become more important than ever. The USB-C3C is an all-in-one universal USB smart multi-charger designed specifically for your car. It’s able to charge most cell phones, tablets, USB battery packs, handheld game consoles, e-readers, e-cigarettes, cameras, smart watches, fitness trackers, bicycle lights, and more, all while sitting neatly and securely tucked away in your vehicle’s cup holder.

To maximize device compatibility, each USB port is equipped with its own individual smart charging chipset that tries to identify the device connected and choose the best charging mechanism to provide the fastest charge possible. Some devices, like the iPhone 6 and 6 Plus, will charge faster connected to the C3C than with their bundled power adapters. Each USB port can charge a device at up to 2.4A, with an impressive 7.2A, 36W total to share between all 3 ports. That is more than enough power to charge 3 power-hungry tablets, such as iPads, at the same time.

The Plugable USB-C3C is a great upgrade especially for cases where:

  • The car has only 1 or 2 USB charge ports.
  • The car’s USB charge ports don’t have a smart charging chipset, so only some devices charge, or devices like the iPhone charge at a slower 1A rate when as much as 2.4A is possible. This is extremely common, as most built-in USB charging ports are device-specific or limited to 1A.

If you have any questions at all, please comment below or email us. We’re happy to help!

Where to Buy

Developing Custom Applications with our Plugable PS-BTAPS1 Bluetooth Switch Fri, 19 Dec 2014 23:31:57 +0000 BTAPS1 Sample App Running on Fedora 21
Our new Plugable PS-BTAPS1 is perfect for many different home automation projects. However, we understand that our Android and iOS apps may not fit the bill for every project, and we wanted to provide a way for hobbyists and programmers to develop custom applications that interact with the device. Today we have some instructions and sample code showing how to trigger the device’s on and off functionality from a custom application. More information on performing advanced functions, such as using the built-in timer, will be forthcoming.

The PS-BTAPS1 uses Bluetooth’s very simple Serial Port Profile (SPP) for communication. All that is required to talk to the device is an open RFCOMM connection and the right payload to trigger the desired functionality.

There are two different payloads necessary for turning the device’s switch on and off:
0xCCAA03010101 will tell the PS-BTAPS1 to flick its internal switch to ON.
0xCCAA03010100 will tell it to do the opposite, and switch to OFF.

To make this somewhat simpler to understand, we are providing a sample application written in Python 2.7. The code for the application can be found below:
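A minimal sketch of such an application follows, using the two payloads listed above. The function names and the script file name are ours, and RFCOMM channel 1 is an assumption (SPP devices commonly listen on channel 1); consult the sample code in our repository for the authoritative version.

```python
import sys

def build_payload(on):
    # Payloads from above: 0xCCAA03010101 = ON, 0xCCAA03010100 = OFF.
    return b"\xCC\xAA\x03\x01\x01" + (b"\x01" if on else b"\x00")

def set_switch(bdaddr, on, channel=1):
    # PyBluez is imported lazily so the payload logic above can be
    # exercised without Bluetooth hardware present. Channel 1 is an
    # assumption for the SPP RFCOMM channel.
    import bluetooth
    sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    try:
        sock.connect((bdaddr, channel))
        sock.send(build_payload(on))
    finally:
        sock.close()

if __name__ == "__main__" and len(sys.argv) >= 3:
    # usage (file name is arbitrary): python btaps_switch.py 00:00:00:00:00:00 on|off
    set_switch(sys.argv[1], sys.argv[2].lower() == "on")
```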

All that is required to run the application is Python 2.7 and the PyBluez module.

This sample application has a simple command line interface for making your PS-BTAPS1 switch on or off.

python 00:00:00:00:00:00 on
python 00:00:00:00:00:00 off

Replace 00:00:00:00:00:00 with your PS-BTAPS1’s Bluetooth address. You can find it as follows:

On Windows: go to Control Panel -> Devices and Printers -> right-click your paired Plugable PS-BTAPS1 (it should be listed as “Plugable”) -> Properties -> Bluetooth tab -> Unique Identifier.

On Linux: open a terminal and run:

hcitool scan
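
If you’d rather find the address programmatically, here is a hypothetical helper (not part of the original post) built on PyBluez’s discover_devices, matching the switch by its advertised name “Plugable”. The discover parameter is injectable purely so the helper can be exercised without Bluetooth hardware:

```python
def find_btaps_address(discover=None, target_name="Plugable"):
    """Return the address of the first nearby device named "Plugable",
    or None if no such device is found."""
    if discover is None:
        import bluetooth  # PyBluez
        discover = lambda: bluetooth.discover_devices(lookup_names=True)
    for addr, name in discover():
        if name == target_name:
            return addr
    return None
```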

This code has been tested under Fedora 21 and Windows 8.1. It should run on older versions of Windows and Linux without issues. Unfortunately, this sample code will not work under OS X due to lack of support in PyBluez. Mac OS X applications should still be possible using OS X native Bluetooth APIs.

More information on programming the Plugable PS-BTAPS1 should follow early in the new year. We can’t wait to see what cool and interesting projects are developed by the community!

]]> 3
Automate Any Appliance with the New Plugable PS-BTAPS1 Bluetooth Controlled Switch Fri, 19 Dec 2014 15:07:40 +0000 Did you grow up watching the Jetsons every morning? I did. As an adult, I’m still waiting for someone to invent a bed that involuntarily kicks me out, showers me, dresses me, and sticks a warm cup of brew in my hand. I probably shouldn’t hold my breath…

While my childhood-inspired wishes may be impossible, home automation is slowly becoming more common. With the new Plugable PS-BTAPS1 Home Automation Switch we aim to bring this capability to your house as simply and inexpensively as possible. When paired with any iOS or Android smartphone, this switch uses our companion application on the phone to control power to any device that plugs into a standard AC wall outlet. You can have that device turn on when your smartphone gets near it, reacting to the presence of your mobile device’s Bluetooth radio. Or set a timer schedule using the phone’s UI. The switch will execute that schedule indefinitely even when you’re not around. Or you can simply use your phone as an instant remote control.

To set up, plug everything in, then connect with the switch via Bluetooth using your phone’s built-in Bluetooth control (look for the device name “Plugable”). Then install the free “Plugable Power” application from the iOS or Android App Store and open it to see and control the device you’ve already paired with.

The switch is controllable by Bluetooth – so when you configure the switch, you have to be close enough to make a Bluetooth connection. Once set to Timer Mode and given a schedule, it will turn appliances on and off to the schedule you’ve set even when you’re not around.

And if you’re a tinkerer or programmer, we have information and open source available to control the switch directly.

We’re excited about the potential of bringing existing lights and appliances to life via Bluetooth. If you have any questions at all, feel free to comment below or email us. Thanks for going out of your way for Plugable products!

Where to Buy

]]> 3
20 Years of Linux Wed, 17 Dec 2014 22:19:24 +0000 Originally Published in Linux Journal Issue #239, March 2014

Twenty years ago, 15 years before founding Plugable Technologies, Bernie Thompson wrote an article for the first issue of Linux Journal. Over the next twenty years, LJ published four more articles, including this 20-year retrospective in March 2014. Thanks to Linux Journal’s generous terms allowing authors to keep their copyright, we’re able to republish the full series of articles here. If you have your own memories and opinions of the time, please share in the comments. This is Article 5 of 5.

Twenty years ago in the world of operating systems, IBM was a dying king and Microsoft an ascendant prince. Apple was in exile. And a young wizard named Linus Torvalds rebuilt venerable Unix, which already had 20 years of its own history, and re-imagined it in the form of Linux. The world had a powerful new open source platform to build on.

By the end of the 1990s, IBM was nearly irrelevant to the personal computer market it had built. Microsoft had risen to nearly 90% market share. Linux had a disconcerting combination of enormous investor hype and low market share. But Linux was beginning to quietly appear in set top boxes and networking equipment. It was dominating the Internet server market. And a little company called Google was deploying thousands of Linux boxes to revolutionize search.

By the end of the 2000s, at least 10 “year of the Linux desktop” milestones had come and gone without success. Windows was still dominant, but the PC era was about to end. Apple had returned from the dead to win the hearts and wallets of consumers and developers by building OS X and iOS on an open source Mach+BSD kernel with a different lineage from Linux. Linux was everywhere and nowhere: the Internet ran on Linux, yet few consumers’ browsers did.

Here in 2014, the world seems to be shifting in Linux’s direction, as long as a Linux-derived kernel is what counts. Windows 8 has alienated Microsoft’s installed base. Mobile is ascendant. Android has reached 80% smartphone share. And last year, Chromebooks rose to take 10% of the US computer market and 21% of laptop sales, outselling Apple MacBooks. Yet the Linux desktop, in the pure form of Ubuntu or one of the other distros, still shuffles along with less than 2% share by most measures.

From Megabytes to Gigabytes

In the first issue of Linux Journal I wrote “Linux needs 2MB RAM to try out, OS/2 needs 4MB, and NT needs 12 MB.”

Those Linux and OS/2 numbers were for command-line configurations only. The NT number included its GUI, since it couldn’t operate without one. But the march of Moore’s Law over the long stretch is stunning: in less than 20 years, we’ve seen nearly a 1000x increase in typical memory sizes.

Today, Windows 8 can run in 512 MB for just a single task or two, but 2 GB is a normal minimum. Linux is still the most miserly: 256 MB is normal for platforms like the Raspberry Pi, and running Linux on a wireless router with 8 MB of RAM is not unheard of using the DD-WRT micro version.

The original article focused on the question “Will Linux, OS/2, or NT work on my PC?” Those concerns have receded as even sub-$200 PCs today can run anything. The important question has become: what’s the best fit for what I need my computer to do?

The Importance of the OS

Simplified OS Layers

A good operating system mediates between users, applications, and hardware with a focus on clarity, compatibility, and security. It enables applications written today to work on the latest OS version and on a significant portion of the installed base of older versions. It enables those applications to keep working even as things underneath change over time. It allows hardware created today to support existing applications and ones not yet written. All it takes is for one essential application or piece of hardware to be unavailable on a new platform, and the user can’t move. This creates powerful lock-in to a user’s current system.

The OS also provides a UI and set of standard ways for users to get things done. Humans are flexible, but they suffer compatibility problems too. When a new UI moves things around, there’s a human cost in time and frustration. When there’s a shift in the way computers are used — from command line, to mouse and keyboard GUI, to touch GUI — not every use scenario or user shifts. We all know the command line is still critical for any IT Pro. Operating systems must either pick a paradigm, or find a way to expose similar functionality in each world.

This isn’t easy, which is part of why the OS world has fractured and diversified in the shift to touch GUIs over the past few years. As platforms and UIs have been churned, at times it seems we’ve needed a Hippocratic Oath for programmers and UI designers: first, do no harm.

And in the realm of security, protecting users and their data has become even harder post-NSA. We’ve entered a Wild West era where we have ceded moral authority to deter nations, corporations, and rogue groups from exploiting others online. It’s everyone for themselves, and a trusted operating system is at the center of that.

How are the operating systems of today doing by these standards?

Windows has had the largest financial investment in it, by far, over the past 20 years. For the last decade, Microsoft has had thousands of employees working on Windows at any given moment. Executives are obsessed to distraction with avoiding the innovator’s dilemma. Many investments are “big bets” that aim to anticipate customer and partner demand, rather than smaller steps to respond to it. Often priorities are focused on internal goals and “better together” initiatives that attempt to extend lock-in or push the user base to the latest version. When those initiatives are aligned with the interests of users and partners, they tend to succeed. But many have not.

Among the unheralded successes has been the Windows Update mechanism. For users, it appears that they plug any new or old device into their Windows PC, and it just works — no digging for disks or downloads. In fact, it’s a smart cloud-based system that was implemented before the cloud was a buzzword. When a new device is plugged in, Windows automatically reads the plug and play IDs, checks its servers on the web, and downloads and installs the best available driver automatically. More than 10 years on, Linux and Mac OS X have nothing equivalent.

On the downside, the Windows 8 transition has been particularly jarring. Microsoft faced a tough choice: lose market share in the new tablet space, or create a version of Windows and an ecosystem of applications that supports tablets well. They chose to sacrifice usability as a desktop system to win new tablet users. The result is a confusing collision of two UI worlds. Some users have gone scrambling to find an alternative, driving up sales of Chromebooks and perhaps hastening the tablet shift.

Meanwhile, the Windows 8 App Store had a new API and no provision to run those apps on Windows 7 or earlier. This created another catch-22 for winning over application developers. It sped up the unravelling of the valuable application lock-in that Microsoft had established starting with Windows 3.1. A commercial application developer a decade ago would have been crazy not to design for Windows. Today, that same developer is crazy not to look first at designing for a hosted HTML, CSS, and JavaScript model that would allow the application to work on any platform. That should be ringing alarm bells in Redmond, given the market share of other Microsoft platforms that weren’t able to leverage the lock-in of Windows binaries (Windows CE, Zune, Windows Phone, etc.).

Security-wise, it’s important to understand that Windows 7 and 8 have perhaps the best line-of-code level security of any existing OS. Microsoft invests enormous efforts to identify and fix potential exploits before shipment. But the weekly “patch Tuesday” flood highlights how difficult it is to protect a big, juicy target like Windows. It’s the potato blight of our era. We’ll talk about Linux security later, but what protects Linux is less the quality of its code, and more the diversity of it.

After one and a half years, Windows 8 has only crossed 10 percent market share. But Microsoft still has a chance to regain its footing, given the combined market shares of XP, 7, and 8.

Mac OS and iOS

Steve Jobs warned Microsoft that trying to merge the desktop and tablet worlds wouldn’t work. And so far Apple has largely stuck to that line, with success. Millions of eager buyers are willing to pay a premium for Apple products. iOS has been passed by Android in pure unit sales, but still gets more use day to day.

Mac OS X was my main platform for much of the 2000s. It puts a nice UI on a deeply functional Unix foundation. The excellent MacPorts system gives entry to the full catalog of open source applications. Apple’s XQuartz project enables even GUI X apps to be ported. Users who care about particular scenarios like working with photos, music, and video have a platform with great support for those scenarios, because of Jobs’s attention to end-to-end quality.

Apple has delivered many innovations, but among the most powerful was the successful iOS and later OS X App Stores. Small developers had withered under Microsoft. By enabling software developers to make money, these app stores allowed Apple’s application ecosystem to quickly rival Microsoft’s.

Apple has historically been more willing to sacrifice the compatibility of older applications and hardware. That might have become a problem as Apple’s installed base grew. But the application problem has been mitigated with the strategy of offering free OS upgrades combined with free application upgrades from the App Store. Your current software binaries won’t work several versions from now, but you won’t care because you’ll have downloaded a free update.

Hardware compatibility, however, has often been sacrificed. One gets the sense that Apple sees a robust hardware ecosystem as competing with them, rather than a source of strength for the platform as a whole.

In security, Apple has been both smart and lucky. Smart to build on Unix. Smart to introduce strict app store criteria and the Gatekeeper feature to steer users away from untrusted apps. But they have also been lucky to stay under the radar. If Windows and OS X’s market shares were reversed, Apple would be forced to have a much higher level of focus on individual exploits, being a monoculture like Windows.

Android and Chrome OS

Google has also kept the tablet and desktop worlds apart somewhat, in the form of Android for tablets and Chrome OS for laptops, both built on the Linux kernel.

Linus’ strategy of benevolent dictatorship for the Linux kernel has delivered stable progress over the years and kept the worst decisions out. However, above the kernel, progress on Linux has been unstable and constantly disruptive… mostly to users, if not to competitors.

With Android, Google’s role has been to provide that stability above the kernel, along with the opportunity for a for-profit ecosystem of software and hardware to build around the platform. The result has been an amazing explosion of Linux-based devices.

With Chrome OS, Google has consummated the process of making the browser the OS. HTML, CSS, and Javascript are the new terminal to the cloud. For those of us who live on the web and hosted applications, there is enormous peace of mind in having no local software and few security issues to think about. Especially when we’re gifting a laptop to grandpa or our daughter. Or if we think we’re on the NSA’s naughty list.

Because Android and Chrome OS are open source, whether you’re Amazon building the Kindle or CyanogenMod, you have the freedom to take it in new directions.

Google has succeeded in hastening the end of the Windows monopoly by grabbing a significant chunk of market share with an open source operating system based on Linux. All this makes it likely that the next generation of applications will be developed with web compatibility and platform portability as a primary concern. Even if Android and Chrome OS do not continue their meteoric rise, for the moment they have mitigated much of the lock-in that would have prevented users from moving to the next innovative platform.

The Linux Distributions

The ever-churning cauldron of Linux distros and spins is the hotbed of innovation from which Android, Chrome OS, Kindle, TiVo, and countless other innovations past and future are born. On the downside, competing groups innovating in different directions has obvious consequences for some key OS characteristics: clarity and compatibility.

In practice, for hardware that is well documented, the results at the kernel level are generally excellent. If you can get your driver into the kernel, it tends to get carried forward intact. On the other hand, trying to keep a binary driver in sync with the Linux kernel is difficult, expensive, and usually futile. Linux’s strategy of sacrificing binary compatibility for the freedom to innovate and keep the kernel clean has proven powerful over time.

There are some exceptions, such as graphics, which touches many layers of the stack. The transition to compositing desktops has hurt performance in Linux scenarios where a beefy GPU isn’t available. And the more functionally complex KMS/DRM driver model has had many impacts on users. OS features like support for multiple monitors, which require participation at many layers, have been difficult to achieve.

At the application level, the story is more mixed than the kernel. Competing libraries, versions, and package managers make it difficult to port even open source applications to every distribution. There are many potential libraries to become dependent on, and each dependency may evolve in ways that will demand that your application change or bifurcate to keep up.

Some distros focus on minimizing churn and compromise by providing facilities such as a commercial app store. Ubuntu and Red Hat Enterprise Linux are best known for playing this role on the desktop. When that plot has been lost, back-to-basics distros like Linux Mint have won converts.

Amazon m1.small EC2 pricing Linux vs. Windows

Linux $0.060/hour
Windows $0.091/hour

In the cloud, Amazon has a growing list of applications in their marketplace built on Linux (often Red Hat- or Debian-based). You can see Linux’s customizability, maintainability, and lack of licensing cost at work in how Amazon prices Linux-based hosting vs. Windows hosting: Linux is the more cost-effective choice.
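
To put the hourly rates above in perspective, here is a back-of-the-envelope comparison (our own arithmetic, not figures from the original article):

```python
# Back-of-the-envelope comparison of the m1.small hourly rates quoted above.
# (Our own arithmetic, not figures from the original article.)
LINUX_RATE = 0.060    # USD per hour
WINDOWS_RATE = 0.091  # USD per hour
HOURS_PER_YEAR = 24 * 365  # 8,760 hours, running continuously

linux_yearly = LINUX_RATE * HOURS_PER_YEAR      # ~$525.60 per year
windows_yearly = WINDOWS_RATE * HOURS_PER_YEAR  # ~$797.16 per year
savings_pct = 100 * (WINDOWS_RATE - LINUX_RATE) / WINDOWS_RATE  # ~34% cheaper

print(round(linux_yearly, 2), round(windows_yearly, 2), round(savings_pct, 1))
```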

Can the Linux distributions do a better job for end users? Yes, definitely. Branches like Ubuntu’s effort to replace X with Mir should not be taken so lightly. It would be a boon to Linux adoption if there were more efforts to consolidate or combine projects, to present a unified front to applications and end users. Is the universe forever expanding until we’re all on isolated islands, or will it consolidate? It’s a delicate balance. We could start with a joint development conference between the Canonical and Red Hat teams to find common ground.

On the plus side, Linux’s software and hardware diversity make it difficult for mass exploits. And more fundamentally, open source is essential for trust through transparency. Any individual, nation, or corporation can do their own code review to weed out potential exploits. While not a panacea for sophisticated, targeted attacks, these characteristics allow Linux to deliver the best security of any platform in practice.

The Future

Despite the challenges, the future for Linux is unstoppably bright.

Beyond the desktop, Linux leads in servers and has completely conquered the embedded market. It’s impossible to conceive of any one company being able to port a commercial OS to as many CPU and device architectures as Linux has. As the embedded world transitions from 8/16-bit processors to 32-bit, billions of new devices will be running Linux inside.

Even today, an average Microsoft or Apple executive is probably (knowingly or unknowingly) running several copies of Linux in their home: perhaps their TV, router, thermostat, or the coming generation of appliances and vehicles.

Linux is currently winning the mobile race with Android, and grabbing significant “desktop” share with Chrome OS. Efforts such as CyanogenMod, Firefox OS, and Ubuntu for phones will keep creating innovative options. And as applications and users increasingly live within the browser, the barriers to move between platforms will continue to fall.

The revolution that Linus started 20 years ago is accelerating. “World domination,” which was previously a sly joke, now appears inevitable. Happy birthday, Linux.

Originally published in Linux Journal Issue 239 March, 2014. Bernie Thompson is a former IBM and Microsoft OS developer, and a Linux kernel contributor. He is the Founder of Plugable Technologies: a USB, Bluetooth, and network devices company.

]]> 0
Elegant combination of AC power and USB charging Mon, 15 Dec 2014 15:04:10 +0000 With portable devices at the center of our technological lives, we always need a place to charge them. The fight for AC outlets and USB charging ports can be a matter of life and death for your phone, tablet, laptop, etc. Having a universal multi-port USB smart charger like our USB-C5T in the home is a great way to replace all of your different chargers, but what about your laptop or tablet that can’t charge off of USB? You’re still stuck hunting for an AC outlet under the desk or behind the couch.

The PS2-USB4 can help solve these frustrations by allowing up to 4 USB devices to charge simultaneously while providing convenient desktop access to dual widely spaced AC outlets, perfect for those pesky wall-wart style power adapters. The four dedicated smart charging ports automatically select the best charging mechanism for nearly all the competing USB charging methods on the market (BC 1.2, Apple 1A, 2.1A, and Samsung 2.4A), so your devices will charge at the fastest possible rate. Some devices, such as the iPhone 6 and 6 Plus, will charge faster than with their bundled power adapters. Each port can charge a device at up to 2.4A, with 6.8A (34W) total to share between the 4 ports. That’s more than enough power to charge 3 iPads and an iPhone at the same time.

The sleek matte black design, compact size, and unique desktop form factor blend nicely with other high-end electronics. Included is a removable 6ft power cable that makes it easy to create a perfect desktop power center for the home or office.

If you have any questions at all, please comment below or email us. We’re happy to help!

]]> 0
Having issues with your Surface and blank DisplayLink screens? We can help! Wed, 10 Dec 2014 02:12:12 +0000

Update: In mid-January, Microsoft pushed a “Surface System Update” via Windows Update which contains new firmware and drivers for the Surface series. These updates appear to have resolved the blank-screen issues that some users were reporting with DisplayLink.

Since mid-November, a small percentage of Surface Pro 1/2/3 users have reported blank screens on their DisplayLink displays after rebooting from the installation of Windows Updates. In the cases we’ve seen, things appear to be functional at the software level – Windows still shows the USB-attached displays as present and active – but the displays themselves remain blank.

If you’re experiencing similar behavior but do not have a Surface, it is likely caused by a different issue. On Surface tablets exhibiting the problem, the Intel GPU driver is dated 3/7/2014, though most Surface users with this driver revision are unlikely to encounter the behavior described above.

We’ve been able to solve the problem in most cases by manually installing a newer version of the Intel HD Graphics driver. We’ve posted step-by-step instructions along with a video of the driver update process here.

For customers with Plugable products, please don’t hesitate to email us directly so we can work with you to resolve any remaining issues.

]]> 3
Throwback: Market Making for the Bazaar Wed, 10 Dec 2014 01:28:06 +0000 Originally Published in Linux Journal Issue #64, August 1999

Twenty years ago, 15 years before founding Plugable Technologies, Bernie Thompson wrote an article for the first issue of Linux Journal. Over the next twenty years, LJ published four more articles, including a 20-year retrospective in 2014. Thanks to Linux Journal’s generous terms allowing authors to keep their copyright, we’re able to republish the full series of articles here. If you have your own memories and opinions of the time, please share in the comments. This is Article 4 of 5.

In Making Money in the Bazaar (June 1999), we raced across the landscape of current efforts to drive innovation and make a living in the Open Source market. We now introduce a system for consumer-driven Open Source funding. If successful, it could accelerate the pace of innovation even further and create a small industry around developing free software.

The Need

Is there some graphics hardware you wish Linux supported better? A game you wish was ported from Windows? Or possibly a GUI application to ease some parts of system setup?

If so, what do you do about it? If you are a developer, you can just write it. The Linux model has been “scratch your own itch” and contribute back to the community.

If you aren’t a developer, don’t have the time or the necessary skill—you’re out of luck. You have to wait and hope that some other sufficiently motivated developer with the same need will take on the project.

This can be quite frustrating—you need that graphics driver now. You could speed things up by hiring someone to develop the software just for you. But paying a fortune to have a custom driver developed for a $50 graphics board just isn’t feasible.

A larger group is needed to share the burden of developing a standard driver. Several thousand people in the world probably use that same graphics card and run Linux. Why don’t some of them get together and share the cost of having the work done?

The same concepts apply to scripts, help files, applications—anywhere there is a demand for software.

A Buyer’s Co-op for Open Source

The Internet is an amazing tool for bringing specialized communities together. A group of people needing the same software is just such a community.

The trick is to attract all of the interested parties to the same web site, where they can pool their resources with others wanting the same thing. This site must coordinate the process of gathering support, selecting a developer, evaluating the resulting software and collecting the funds to pay the developer.

The Free Software Bazaar

Interview with Axel Boldt

The Free Software Bazaar gathers bounties for the completion of particular Open Source projects. I talked to Axel on April 27, 1999.

Bernie: What inspired you to create the Free Software Bazaar?

Axel: In a Usenet discussion, the question came up of whether buying a Red Hat CD is a good way to sponsor free software development. I then got the idea that there may be a better, more direct way to induce people to write free software—to “cut out the middleman”.

Bernie: Is money important for the free software movement?

Axel: In the grand scheme of free software development, money does not currently play a big role. Personally, I am not at all unhappy about that fact. I like to think of the Bazaar as a place where programmers can get ideas for projects that are actually needed, and where users can show their appreciation for the wealth of software they get for free.

Bernie: Why would any individual commit money towards funding an Open Source project?

Axel: Two reasons: you need a certain piece of software and you think the free software development model would produce the best results, or you feel the need to “give back” to the free software community. Most of the time, it’s probably a combination of the two.

Bernie: What does the system offer for developers?

Axel: Some money, but mainly ideas for new worthwhile projects.

Bernie: What effect could cooperative funding have on the Open Source community at large?

Axel: It may help establish better communication between users and producers of free software. Users will be able to outline a wish list for a new project, as opposed to just providing feedback and patches to already-existing code.

Bernie: What is your background?

Axel: I maintain the Linux kernel configuration help texts, the Linux CD Giveaway List, and the programs tkinfo and WebFilter. I teach mathematics and computer science at Metro State University in St. Paul, Minnesota.

Bernie: What are your plans for the future?

Axel: Retire early and keep playing with free software.

Axel Boldt’s Free Software Bazaar was the first realization of these ideas. It opened in the fall of 1998; within six months, it collected over $25,000 US in offers and over $1200 in payments towards Open Source projects. The site works by letting users browse a list of existing offers. If a user is interested in sponsoring an existing project or creating a new one, they send e-mail to Axel. He then adds their offer to the listing page.

These offers can be claimed by the first developer to successfully complete the work. Axel then notifies the sponsors, asking them to send payment directly to the developer.

The Free Software Bazaar is a great service to the Open Source community. However, a huge ongoing effort is required to maintain momentum and grow the movement into something powerful.

For cooperative funding to become a significant force in the Open Source market, the achievements of the Free Software Bazaar must be multiplied many times. Managing the demands of so many parties is a difficult problem. A cooperative funding service needs to be innovative in solving the confidence and communication problems between sponsors and developers. It should be convenient and simple. It must be professional and build a strong record of trust. In the end, it is essential to attract and maintain a critical mass of sponsors and developers. CoSource.com is an attempt to create a service that meets these demands. It is a commercial enterprise created to provide the range of services required to make cooperative funding a success for buyers, developers and the Open Source community in general.

Screenshot of CoSource.com, taken later in October, 2000

It intends to:

  • Publicize to better achieve a critical mass of sponsors and developers for each project.
  • Have staff work with corporations directly to encourage sponsorship and development of open source.
  • Provide VISA/MC/AMEX credit card processing to make payment convenient (especially for non-US sponsors).
  • Automate as much as possible with HTML forms and a database back-end.
  • Provide a stable web address and organization for long-running projects.
  • Foster trust between buyers and developers through simple, standard and legally binding on-line agreements.
  • Provide cash advances to developers with a proven track record when they begin work.
  • Provide web space to trumpet the financial contributions of corporations and individuals towards particular projects.
  • Prevent duplication of effort by having sponsors collectively hire a single developer to work on a project.
  • Allow sponsors to back out any time until a developer is finally selected.
  • Allow full control for sponsors to select the developer, development schedule, source code license, etc. that suits them.

If all these goals can be achieved, cooperative funding will provide effective answers to the questions, “How does one make a living on free software?” and “Who is motivated to innovate?”

The answers are that innovation is funded directly by users who pay for new features, which in turn supports a small army of independent software developers.

How Does It Work for Sponsors and Developers?

It starts with an unfulfilled need. Maybe it’s a driver for a USB scanner, a plug-in to convert Excel spreadsheets, or a port of some game to Linux. The user goes to CoSource.com and finds the project to develop this feature. If it is not there, they can add it with a form.

What is it worth to them for someone to develop this software? Whether it is $10 or $1000, the customer sponsors the project for that amount. This is not something done lightly. A buyer is making a firm commitment to pay up if the software is developed.

Other motivated sponsors come along and do the same. CoSource goes out to corporations and seeks to supplement individual sponsorships with a few large ones. Let’s say the project is an HP scanner driver. While HP isn’t yet ready to pay the full cost of developing a Linux driver, they may be willing to pay for 50% or 25% of the effort.

Developers, meanwhile, browse these same lists to identify projects in their area of expertise. Suppose a developer has done a converter for the Excel file format before. That developer fills in a form to bid on the project, answering the following questions:

  • How much would I have to be paid to do this work?
  • What is my most conservative estimate for time to completion?
  • What license would it be released under (e.g., GPL, BSD, etc.)?
  • Who will judge the final product for completeness and quality (e.g., a known and trusted third-party authority)?
  • What will be the URL of the project’s web page?

CoSource then prices the bid for display on the system, marking up the bid for transaction costs, historical sponsor fraud, advance payments, project risk, etc.

As bids are entered, sponsors are notified to evaluate them. They submit a simple yes/no form in response. Voting yes to one or more bids is a final commitment involving a legal agreement to follow through if this developer succeeds. The first bid that gains sufficient sponsorship wins. From that time forward, sponsors are not permitted to back out and shortchange the developer.
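The flow above can be sketched in a few lines of code. This is a purely hypothetical illustration, not CoSource’s actual system; the 30% markup rate, the dollar figures and the developer names are all invented for the example:

```python
# Hypothetical sketch of the sponsorship/bid flow described above.
# The 30% markup and all figures below are invented for illustration.

MARKUP = 1.30  # covers transaction costs, fraud risk, advance payments, etc.

def displayed_price(bid_amount):
    """Price shown to sponsors: the developer's bid plus the markup."""
    return bid_amount * MARKUP

def winning_bid(bids, pledges):
    """Return the first developer whose marked-up bid is covered by total pledges."""
    total_pledged = sum(pledges)
    for developer, bid_amount in bids:
        if displayed_price(bid_amount) <= total_pledged:
            return developer
    return None  # no bid funded yet; the project stays in the sponsor/bid phase

# Individual sponsors and one corporate sponsor pool their commitments.
pledges = [10, 50, 1000, 2000]             # $3,060 pledged in total
bids = [("dev_a", 3000), ("dev_b", 2000)]  # bids, in the order entered

print(winning_bid(bids, pledges))  # dev_b: its $2,600 displayed price is covered; dev_a's $3,900 is not
```

The key point the sketch captures is that sponsors commit against the marked-up price, and the first sufficiently funded bid wins.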

The winning developer then begins work on the project, providing updates on their project web page.

At some point, the developer believes the work is complete. A release is made and judged to determine whether it matches the requirements stated in the original project description. If the release fails, the developer may try again as many times as needed until their committed schedule runs out. If that unfortunate event happens, the project goes back to the sponsor/bid phase.

If the release passes, the project is complete! Sponsors are notified to fulfill their commitments. They can easily pay by credit card. Finally, payments are consolidated and a single check is mailed to the developer.

What Are the Goals?

Obviously, this process is more complicated than a typical software-buying experience. In return, the consumer gains much more control over the quality and time frame of work. If you needed one of the new features Windows 2000 provides, you would have to wait two to three years after the initially promised ship date to get it. How can a corporation plan ahead for software rollouts with such uncertainty?

Cosourcing puts more control over features, schedule and quality in the hands of the consumers.

Obviously, this system is not intended for charity or non-profit activity. Rather, it is intended to be the most effective way to outsource development of software and share that cost with other motivated buyers. It is intended to be a way for non-developers to “scratch their own itch”. It is intended to be a fertile breeding ground for hundreds of individual and corporate developers. It is intended to make the funding of Open Source a collaborative effort in the same spirit the development process is in today.

In general, it is intended to empower end users to spend their hard-earned money making free software do what they need.

Making a New Market

On one end of the software market spectrum is “closed” software, where intellectual property is licensed on a per-copy basis. On the other end is free software, where intellectual property is created without payment and voluntarily given to the community at large.

Both of these will continue to grow and thrive. On one hand, closed software will continue to be a billion-dollar market. On the other hand, innovative free software will continue to be developed by students, hobbyists and professionals for various reasons. Both systems make sense and they will continue to compete with each other. But there is a possibility for a vibrant third market. One which blends and bridges the differences between the other two. One that brings the free market to free software.

This market will sell software as a service rather than a product, so it will be compatible with Richard Stallman’s original and ongoing vision for public license code. This market will serve the needs of end users by driving innovation in the areas that matter most to them. This market will bring financial vitality to free software, so thousands of individuals and small companies can make their living developing it.

Software in 2010 — A Look Into the Future

With the rise of the Internet and the Open Source movement in the late 1900s, the basic building blocks of software—operating systems and libraries that applications build upon—started becoming a collaborative effort. At first, few people believed that great software could result. Few believed that a rag-tag collection of individuals and companies, working in parallel, could produce a great platform to build upon. But they did it.

Then, at the turn of the millennium, markets sprung up to collaboratively fund these same projects. Drivers, scripts and middleware to connect Open Source with every kind of software and hardware device were developed. Many small but frustrating problems were fixed. Open Source was now the most interoperable software platform available, and it was getting all the polish and customization needed to appeal to the full spectrum of end users.

Open Source did not win out completely. Rather, the result was intense competition between closed- and open-source platforms that drove accelerating innovation for all.

In recent years, the open model has gone on to tackle problems beyond the platform: highly parallel problems that require a huge collaborative effort; projects that require complete openness and collaboration; efforts that are beyond the resources of a single corporation; modern pyramids of software.

In 2004, the first of these successes—the Interling project—was completed. It is, of course, the software we use to translate hundreds of written and spoken languages to and from the common Interling language. Dozens of programmers were required for each dialect to produce the complex grammar-processing codes. The project was possible only through the participation of thousands of programmers worldwide, with work on each language funded by motivated individuals, corporations and governments.

In 2006, we completed the initial work of the Historica Humanica project. Every piece of writing, every painted canvas and every available oral history was scanned and entered into our huge searchable database. While not every individual has or will publish a full autobiography, many have willed that their invaluable memoirs be made available at their death. What can we learn from history? We’ve found we can learn much, especially at the personal level. The human psyche has not changed dramatically over the ages. We are now able to search our records for others who have felt the same pain or dealt with the same concern. In these writings, we have found perspective and understanding to guide our path forward in everyday life.

Now, in these last few years, we’ve begun to tackle the most daunting effort yet—the Neuroscape project to approximate and emulate the human brain. We learned early in our AI work that no one simple algorithm can replicate the wonder of the human brain. Rather, the brain is made up of millions of flexible, evolving rules and guidelines the equivalent of billions of lines of software code.

It is a project we can hope to achieve only through the most massive parallel effort ever undertaken by humankind…

This vision may turn out to be a pipe dream. Consumer psychology has rarely dealt with a case in which a group of consumers pay for the development of a product, then allow that product to be given away freely from that time forward. Psychologically, this is a strange mix of self-interest and altruism. CoSource and others are going to put it to the test. If you’ve ever complained about some missing piece of free software, now you can put your money where your mouth is. Will you?

Originally Published in Linux Journal Issue #64, August 1999

Throwback: Making Money in the Bazaar Mon, 08 Dec 2014 19:00:02 +0000 Originally Published in Linux Journal Issue #62, June 1999

20 Years ago, 15 years before founding Plugable Technologies, Bernie Thompson wrote an article for the first issue of Linux Journal. Over the next twenty years, LJ published 4 more articles including a 20-year retrospective in 2014. Thanks to Linux Journal’s generous terms allowing authors to keep their copyright, we’re able to republish the full series of articles here. If you have your own memories and opinions of the time, please share in the comments. This is Article 3 of 5.

Open Source is software which has been freed. It allows bits to be copied and reused endlessly. It allows inspection of the source code. It allows new innovations to be built upon old, without having to duplicate past efforts. It is free software with the emphasis on freedom.

This past year has seen an explosive rise in visibility for this curious market. The computer world at large has gained at least a limited understanding and respect for its workings. Much of this attention would have been unimaginable even a year or two ago.

During this time, Open Source has been put under heavy scrutiny. While certain technical benefits are undeniable, every analysis invariably confronts two simple, critical questions: “How does one make a living on free software?” and “Who is motivated to innovate?”

The strength of the answers to these questions will determine if Open Source will achieve its full potential for the greatest possible audience. It must be economically viable.

I will attempt to answer these two questions by surveying the field of current business models and analyzing their financial strength. I will also speculate on future innovations that may alter these dynamics.

Business Models

Money rests on the axiom that every man is the owner of his mind and his effort … Money permits you to obtain for your goods and your labor that which they are worth, but not more … Money is your means of survival.

—Ayn Rand, Atlas Shrugged

The obvious challenge of Open Source is that it may be copied freely, even if purchased initially. So a $10 Linux disc may legally be used to install one machine or a thousand machines. At first glance, it would seem no incentive exists to put effort into improving such a product. Because of this characteristic, Open Source is often equated with a kind of communism: a system that offers something for nothing and exploits the labor of others without rewarding them; in short, a system that is unsustainable because it causes people’s self-interest to conflict with the greater good.

These concerns should not be dismissed out of hand, nor taken as factual. The truth is much more complicated. Central to these concerns is the lack of exclusive copyright protections. Copyright and patent laws are not inherently part of the free market; they are intended to create limited monopolies for the companies which own the rights. This is done to reward research and to encourage innovation.

Open Source is a voluntary system that waives exclusive ownership of software in exchange for other benefits. These benefits include wider adoption, faster collective innovation and a level competitive playing field. This makes for a frictionless, dynamic and highly competitive market without the very profitable “vendor lock-in” that is facilitated by traditional copyrighted software.

Despite the resulting competitiveness, several business models have proven to be profitable. These models leverage the unique new possibilities afforded by Open Source, in return for their sacrifice of certain copyrights.

What is still unclear is how these models will generate as much innovation and value as traditional software companies, given the handicap that a person’s work can benefit his competition as much as it benefits himself. As we’ll see, one of the surprising things about free software is where the innovations have originated.

In the following sections, I’ll introduce the markets that are producing innovation and jobs today. These are the research, service and customization economies and the many business models that fall into these groups. Nearly all companies are hybrids of several different business models.

The Research Economy

Open-source corporations are important growth engines, but to date, they have built mostly upon the efforts of others. The bedrock of the market is the thousands of individual students and moonlighting professionals who make small and large contributions every day.

These developers are not paid for their efforts. They begin a project with no promises or commitments. They work at their own pace, use their own judgment and set their own priorities. They are the university researchers and basement scientists of software, working together to make their contribution to the world.

Often, these developers are only learning or honing their craft, so many projects fail. Yet out of this soup of individual and group efforts rises some of the best software available today.

Through the Internet, these successful efforts can be instantly copied and put into use world-wide. They can be enhanced and customized by thousands of others. They can continue to evolve like an organism, adapting to new software and hardware architectures as the years go on.

The first and most unshakable answer to “Who will innovate?” is the students and moonlighters, motivated by their desire to learn and create and inspired by the energy and clarity of tackling new problems. The profit-oriented market may fail, but these software research activities will go on. Slowly, surely, they will continue to add to the body of free software available to the world.

Yet despite the best efforts of the students and moonlighters, their software has common flaws. Development goals are driven by the author’s own needs, resulting in software “by developers, for developers”. The threshold at which a developer is satisfied with ease of use is much lower than for typical users. These are truly research projects, with all the beauties and warts that implies.

The Service Economy

In the cases where beauty has outweighed warts, a critical mass of technical and non-technical users has been built. Apache, Linux, Perl and many other programs have made this breakthrough to mass market utility. This expanded base of consumers has driven the need for many support services built around the software. These services add polish and value to the base provided by the initial project.

No one is required to pay for any services. Given only a Net connection, they can download what they need and figure everything out themselves. However, many consumers find their time more valuable and therefore seek services to make their lives easier. This is where “commercial” Open Source steps in to distribute the software, provide technical support and educate users.

Distribution

The Internet is great for downloading small software, but for larger products it is too slow. Also, finding the software you want among the jungle of projects on the Web can be difficult.

From this need rose companies like Red Hat, Caldera, SuSE, and Walnut Creek. On one convenient CD-ROM, you have an organized collection of the best available software applications. These companies are achieving significant revenue and earnings with these services.

Opportunities exist for further specialization. LinuxPPC has made a solid business out of focusing on the PowerPC market. A small company could take this further by taking one of the most popular PC lines (such as those from Dell or eMachines) and producing a distribution that is tailored and tuned to that hardware. It could guarantee that all devices are recognized and install flawlessly. It could optimize every program to the particular processor used on that machine. You could imagine Intel co-marketing and co-developing with a software company to optimize for their latest chips.

Open Source can be a key in the drive towards mass specialization of computer products and services. As the overall size of the market increases, more opportunities will be created in these small niches and sub-niches. All of this is made possible by the full access to source code that free software provides.

Technical Support

When a single company owns exclusive rights to a software product, it is obvious where the most informed technical support comes from: if you buy a Microsoft product, you go to Microsoft for technical support.

The fact that Open Source does not have an exclusive support provider has repeatedly been portrayed as a weakness. This is a fundamentally flawed notion. Rather, Open Source allows a whole market of support providers to compete on a level playing field of equal access to the code.

Through this heightened competition, the level and quality of support is capable of rising above the best standards of today’s closed software market.

Red Hat, LinuxCare and many other companies and individual consultants have stepped up to serve this market. Early in 1999, IBM recognized these extraordinary opportunities and announced Linux support and consulting services. The competition between these companies will become intense, and customers will be well-served.

If open-source support services can achieve their full potential, it will become a major selling point for corporate users and consumers. Innovations in providing these services will provide the foundation for many viable new businesses.

Education

O’Reilly & Associates has built a booming book publishing business which topped $40 million in 1998. More than half of this revenue was from books about free software topics.

As the market grows in size, more educational services will be needed. These are significant opportunities, since any educator, author or consultant can delve into the inner workings of the code to produce definitive training materials for a subject. By working on and teaching about specific areas, a valuable reputation can be created. (See Table 2, Linux Consultant Survey.) Several of the most successful consultants built their businesses by being a recognized world-wide expert in a particular technology.

The Customization Economy

The next step beyond servicing existing software is the creation of new applications to solve outstanding problems. This may be in the form of hardware devices that come preconfigured for a particular need. Or it may be through employees or consultants who configure and enhance software for particular needs.

In a world dominated by a single vendor, there are limits to the innovations a new product can provide because of high prices, too few features, too many features, logo requirements, etc. Many interesting new applications are suddenly possible when these shackles are removed. You just need freedom to customize.

Hardware Bundles

Hardware preloads and bundles are some of the most compelling uses of free software, because the cost of developing or enhancing free software for the machine can be included in the price of the hardware.

One example is the Cobalt Qube. This is a space-age blue 18.4×18.4×19.7cm server appliance running Linux on a RISC processor. It is a general purpose workgroup server for e-mail, Web, etc. Having full access to the Linux source code gave Cobalt the capability to fully customize the software for this uniquely simple but very powerful hardware platform. (See “Cobalt Qube Microserver” by Ralph Sims, October 1998.)

Another is the Snap! network storage server from Meridian Data. It’s a fixed-function server appliance that shares disk space on the network. It is built from custom hardware combined with open-source software. Consumers don’t need to know it uses free software; they just need to know what it does. Customers expect the price of network storage to scale with the price of disk storage, so the hardware and software costs of using a proprietary software system could have greatly reduced the attractiveness of the product.

Obviously, one big advantage is having no per-device software royalty. This is particularly true for price-sensitive, high-volume products. In a few years, we may find dozens of companies embedding open-source operating systems and applications on millions of small, fixed-function hardware devices.

IT Professionals

Beyond hardware devices, there is a need to customize and adapt software applications to the exact business processes and needs of an organization.

This always requires some custom work. Most medium and large organizations have a crew of IT professionals whose job is to customize hardware and software to make the business run more smoothly. These professionals like to start with the most functional products possible and customize from there. This has meant proprietary software in most cases.

Recently, open-source software has achieved levels of functionality that match proprietary software in many cases and has the advantage of not being tied to one vendor for support or product updates.

Rather quickly, it may become cost-effective to customize free software, rather than pay for thousands of licenses of commercial software on which to build. This shift in the market will require a growing number of professionals who specialize in open-source software.

This is perhaps best reflected in the salaries of IT professionals. A 1998 salary survey of 7,189 professionals asked which operating system they primarily used. Those who reported Linux as their primary OS earned an average of $61,027 US, versus an overall average of $60,991. Linux salaries had increased 16.5% from the previous year, the fastest salary increase for any system (source: SANS Institute).

In-house staff is not the only option. Again, because of the freedom to inspect and study the software down to the lowest levels, a competitive industry is able to grow to serve whatever needs arise. The resulting alternative to in-house staff is a competitive market of independent consultants.

Consulting

When the cost of the software goes to zero, the value is in customizing for specific problems. Consultants already make their living providing these per-hour or per-project services. Open Source is not a sacrifice; it is an opportunity.

One example is comprehensive support. Most businesses want a single point of contact to take full responsibility for getting a project done. With closed source, contractors are at the mercy of bugs and limitations in the operating systems and applications they purchase. In effect, they cannot guarantee success. They do not have full control of the technology.

With Open Source, they have complete access to solve every problem, no matter what level or layer it occurs in. A small company with a skilled force of engineers can provide a level of comprehensive application-to-operating system service that only IBM or HP or Sun can provide today, and probably at much lower prices.

To get an understanding of the size and health of the Open Source consulting market, those registered in the Linux Consultants HOWTO were surveyed. They were asked the following questions:

  1. How many consultants at your company are involved with Open Source work?
  2. Approximately how much money did your company (or yourself, if independent) earn in 1998 on Open Source-related work? (Convert to US dollars)
  3. In 1999, based on numbers from recent months, how much do you expect this to increase/decrease? (as a percentage)
  4. In 1999, do you believe it is possible to make a living doing Open Source consulting work? (yes/no)

This is a very diverse group of VARs, integrators and consultants. Over 50% are from outside the U.S., where the cost of living may sometimes be lower. In most cases, open-source work is just a piece of the total business. While this is certainly not a scientifically rigorous study, it does give some flavor of the market.

Table 2. Linux Consultant Survey

Number of Responses: 79
Median 1998 Earnings per Consultant: $15,000 US
Minimum Earnings per Consultant: $0 US
Maximum Earnings per Consultant: $312,500 US
Median Predicted 1999 Growth: 70%
Possible to Make a Living in 1999 (answered yes): 77%

A key point from the survey is the importance of being a “jack of all trades.” You must focus on serving the needs of the customer, including doing work on closed source. In 1998, the median earnings per consultant on Open Source alone were not enough to make a living, and only 12.7% of the consultants made more than the $61,027 salary of IT professionals mentioned above. Business has picked up dramatically in recent months, however. As a whole, the consultants were very bullish on the coming year.

In the previous sections, we’ve covered the current business models that provide a living for employees, and innovations for consumers. There are certainly strengths, but the market is still tiny compared to traditional shrink-wrapped software. Young companies with new ideas are needed in order to grow the market.

Funding New Companies

Capital is the fuel for companies that will serve any new market. This money may come from the on-going operations of the business or from banks or investors. What is the current environment for getting this funding?

Venture capitalists, the investment partnerships that fund high-risk/high-return companies, are skeptical so far. Their analysis of these opportunities keeps coming back to a critical point: Open Source, by definition, eliminates the barriers of entry to a market. How can a company build a sustainable market advantage if their work can immediately be used by a competitor?

Table 3. Open Source Venture Funding

Company       Amount    Date        Investors
Red Hat       unknown   Sept 1998   Greylock, Benchmark Partners, Intel, Netscape
Sendmail      $7M       Fall 1998   Silicon Valley Band of Angels
Cygnus        $6M       Feb 1997    Greylock, August Capital
VA Research   unknown   Fall 1998   Sequoia Capital
ActiveState   unknown   1997        Tim O’Reilly
Scriptics     $400K     Jan 1998    Advance Sales

Given this limit on the upside, only a limited number of open-source companies have received funding. These companies have identified key factors to protect them from competitors. For Red Hat, it is a strong brand. For Sendmail, it is having an open/closed mix in their software product line. For a company like Cobalt Networks, it is combining closed hardware with open software. As this market matures, more companies may achieve profitability and attract investment dollars for everyone.

Until then, companies must bootstrap themselves. Ironically, this is feasible because of those same low barriers to entry that scare off investors. An open-source company can build on the past efforts of others, meaning less capital is required to start the company.

Problems to be Solved

In summary, what are the problems that companies must solve in order to grow the market in new directions?

  • The financial motivation for innovation must be stronger. Most of the current successful business models other than consultants make money off “secondary” services, rather than the software development itself.
  • Open Source is still largely “by developers, for developers”. To achieve mass market success, it must become more customer-driven and consumer-friendly.

Traditional software products harness the free market to solve these issues. Consumers pay to buy a software product if it meets their needs, which means it must be very polished. Successful products are profitable for the companies that create them. Unsuccessful products die off. Through these mechanisms, good developers make a living and consumers get good choices.

Open Source needs to create systems to provide these consumer checks and balances.

The Search for Solutions

The business models described throughout this article are by no means a comprehensive list. This is a young market we are only beginning to understand. It could yet defy the skeptics and evolve into something that serves customers better and is financially strong.

Part two of this series will explore one particular possibility in this universe of interesting but unproven ideas—a consumer co-op for software contracts. It uses the Web to let consumers commit funds up front to pay for the development of specific applications, feature enhancements or bug fixes critical to them. Resources are pooled, so each person pays only a small portion of the total cost. It is a system compatible with, and tailored to, Open Source. I will analyze this idea in detail, describe an attempt to create a web service which provides the necessary mechanisms and speculate how this system might affect the progress of the open-source market.

With this idea and many others, the open-source market is a fascinating mix of possibilities and dangers. In recent years, it has grown from thousands to millions of users. Several profitable companies are now serving the needs of these consumers.

The next few years will certainly see continued innovation from the open-source research community. From the business side, it remains to be seen whether the current momentum will continue or be struck down by market realities. It may very well depend on the innovations created by the upcoming generation of open-source entrepreneurs. It’s a free market. May the best products win.

Originally Published in Linux Journal Issue #62, June 1999

Throwback: The World Wide Web Sun, 07 Dec 2014 02:05:11 +0000 Originally published in Linux Journal Issue #3, June-July 1994

20 Years ago, 15 years before founding Plugable Technologies, Bernie Thompson wrote an article for the first issue of Linux Journal. Over the next twenty years, LJ published 4 more articles including a 20-year retrospective in 2014. Thanks to Linux Journal’s generous terms allowing authors to keep their copyright, we’re able to republish the full series of articles here. This particular article is quaintly dated now. If you have your own memories and opinions of the time, please share in the comments. This is Article 2 of 5.

Geneticists share genome data with colleagues. Fans of Anne Rice talk about her latest books. Hundreds of programmers team up to create a free Unix system called Linux.

By enabling individuals around the globe to communicate and cooperate, the Internet has sped up the pace of scientific innovation. With 20 million users and growing, it has created a culture based on instant information and hyperkinetic communication.

The challenge now is to expand the power of the Internet to a wider audience and make it more convenient for all. Rising to this challenge is a system called the World Wide Web. By bringing all of the Internet’s old and new resources together, the Web stands to become the one simple, standard way to access all of the Internet’s riches.

It stands to revolutionize the revolution.

What is the Web?

To picture the World Wide Web, imagine a page from a book. By pressing your finger on any of the words, you receive a new page with more detailed information about the subject selected. The Web is like a huge book being constructed on the Internet.

Tens of thousands of these pages are already scattered around the world. They are created by experts and novices of all disciplines who use the Internet. The amount of information available is stunning: an encyclopedia, a dictionary, world maps, complete information on US government agencies, extensive Linux documentation, and much much more. It’s an amazing source of information.

How does it Work?

The Web is really a collection of computers, hooked together over the Internet, that pass pages of information back and forth. A program called a Web browser is used to find and view information. When you select a word in your Web browser, a message goes out to another computer somewhere in the world. That computer will respond by sending back the page you requested.

The messages between computers are encoded in the form of a Universal Resource Locator. The URL describes what kind of information you want, and where to look for it. It’s like a mail address for the Web.

The pages can contain text, images, sounds and more. A page can even contain fill-in-the-blank forms which you complete to send information off to another computer. A markup language called HTML describes, in general terms, how the text, images, and forms should be positioned on the page. Web browsers can use this general description to lay out pages in different ways. For example, a browser that works in text mode only can ignore all the images. Or an X Windows browser can shift the text and images of a page when the window is resized.

URLs describe the locations of a page, and HTML provides an adaptable description of the information. Together, they make the Web an extremely flexible system.

Why Use the Web?

To everyone familiar with Internet newsgroups, FTP sites, and gopher servers, the Web may not seem groundbreaking. But the impact is, in fact, dramatic.

The most important advantage is simplicity. To maneuver around libraries of information, you need only click the mouse; there are no commands to learn or key combinations to memorize.

The Universal Resource Locator also provides an easy way to access other forms of Internet information such as gopher and archie. This way, there is only one, simple interface used to access a multitude of resources. New users benefit most from this simplicity.

Unlike newsgroups, Web information stays around as long as the author, or anyone else, wants to keep it available. Unlike information available via FTP, the information is cataloged and categorized, so it is much easier to find and access. Unlike gopher, the information is laid out like one gigantic book. You can read what you like, and select more detail only if you’re interested.

Many have predicted that cyberspace would become a chaotic mess where information is impossible to find. The World Wide Web is a system designed to bring order to the chaos.

Linux and the Web

Linux is one of the best systems, commercial or otherwise, for access to the Web. Linux is a version of Unix, the native platform of the Internet. So the best Web tools are often available on Linux before DOS, Windows, or Macintosh.

Two of the most popular tools for the Web are Mosaic and lynx. They are Web browsers with different goals. Lynx is small and text-only. Mosaic is large and full-featured. To browse the wealth of information on the Web, just pick one, install, and wander off to explore the wonders of the Web.

Lynx will display the text of a page without any images or fancy fonts. It only requires the keyboard arrow keys for navigation. This simplicity makes it fast and flexible. Lynx uses the VT100 protocol, so people without access to X Windows can still use lynx, including users logging in to a Linux system remotely.

Lynx is miserly with disk space and memory. It will only take up 320K of disk and consume about 620K of RAM while running. This makes lynx perfect for 4MB Linux workstations.

Lynx is not as common as Mosaic, however. This is primarily because lynx doesn’t support the colorful images and different fonts that make the Web so expressive. For these, Mosaic or some other graphical browser is required to experience the full richness of the Web.

Lynx is freely available. To get lynx for Linux, check the Linux Software Map. For lynx source code, FTP to

In contrast to lynx stands Mosaic. Mosaic is probably the best Web browser available, and it is certainly the most famous. It takes full advantage of the Web.

Mosaic is a graphical browser, so it will only run on systems with X Windows (or MS Windows or Macintosh). A number of companies such as Quarterdeck have licensed the free Mosaic for commercial use rather than write their own. Mosaic is truly a high-quality application. But the price for beauty is more memory and disk space. Mosaic consumes over 1.3MB of disk space, and will use about 2MB of RAM while running.

Like lynx, Mosaic is freely available. This freedom is slightly restricted, however, because Mosaic uses the Motif programming libraries. This means you cannot recompile Mosaic yourself without a license for the Motif libraries.

To get Mosaic, search for it in the Linux Software Map. A copy of Mosaic can be found at and most other Linux sites. Assuming you have X Windows, no special installation procedures are needed. Just prepare to be impressed by a very professional program.

Using Linux to Contribute to the Web

DOS, Windows, and Mac users can happily access the Web with a browser. But Linux adds an extra dimension: the ability to become part of the Web by making information available to others via a Web server.

Setting up a Web server is certainly more difficult than using a browser, but is still surprisingly easy. This ease of use allows a large number of Internet users to contribute to the Web project. And Linux makes a great platform for a Web server because of its small size and speed. With the growing body of Internet Linux users, the potential is enormous.

Httpd is a popular Web server available for Linux. Its creator, NCSA, is the same group that developed Mosaic. Httpd will consume about 200K RAM while running. The program itself uses very little disk space. Just allocate enough disk space to hold all the pages of Web information.

The burden of the Web on a Linux server was tested by creating a working Web site on an i386-33 Linux machine. During a 40 day span, the PSU Linux WWW received 5375 requests from 1328 different sites around the world. This is an average of 6.9 requests per hour and 165 per day. The Web requests never interfered with any work being done by other users on the machine. A Linux system can easily provide Web services and have horsepower to spare.

To get httpd and set up your own Web server, look for httpd in the Linux Software Map. For full documentation on httpd, look at This site has all the documentation needed for installation.
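
The request/response cycle a server like httpd performs can be sketched in a few lines. This uses Python’s standard library purely as a modern illustration — NCSA httpd itself is a separate C program configured quite differently, and the page content here is invented:

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# A minimal sketch of what a Web server does: accept a request
# for a page and send the page back over the network.
PAGE = b"<html><body><h1>Hello from a tiny Web server</h1></body></html>"

class PageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for any free port; a real server would use 80.
server = ThreadingHTTPServer(("127.0.0.1", 0), PageHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print("serving on port", server.server_address[1])
```

A browser (or any HTTP client) pointed at that port gets the page back with status 200 — the same conversation, at toy scale, that a Linux box running httpd holds thousands of times a month.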

The Web Means Business

The Web has attracted a surprising amount of attention from the commercial world. This is a testament to its effectiveness. No other existing system has the clean design, flexibility and momentum that the Web enjoys. “It’s the killer application of the Internet,” says Eamonn Sullivan of PC Week. “I know everyone says that now, but that doesn’t make it any less true.”

PC Week Labs discovered Linux during the course of setting up a Web server. The result is they continue to use Linux on their server, and Linux won PCX Week’s Product of the Week for April 18, 1994. Press exposure of this sort will inspire new and exciting business applications for Linux.

The number one buzzword in business today is client/server. The Web and Linux fit perfectly into such a system. A documentation solution for a large organization can be quickly and effectively developed with the Web.

For example, a business might be grappling with the documentation requirements of ISO 9000, the process quality standard of the International Standards Organization. Using the Web with Linux servers and clients running Mosaic, documents related to ISO 9000 can be sorted in one location, updated only by the group responsible for that process. But anyone inside the organization can view the documents. The Web is an easy, open solution to a thorny problem.

Exciting Possibilities for the Linux Community

Still, the greatest potential of the Web lies with the Internet-connected Linux community. As Linux continues to prove itself to Internet users, the audience for a Linux Web will grow. By distributing data around the world, enormous amounts of information can be conveniently accessed from any Linux machine.

The first possibility is for manual pages and info pages. This infrequently accessed information can be viewed via the Web instead of storing copies on every machine. There are already a number of Web servers which have manual pages and info files on-line. Hook into the Web and save space.

Using FTP to access files can be difficult for novice users. A Web browser provides a friendly interface for getting files from FTP sites. Common FTP sites can appear as links on a Web page. Newsgroups, also, can be accessed via the Web. Hypertext links between followups are automatically created.

Subscribing to mailing lists can be automated. Using fill-in forms, users could select links to subscribe and unsubscribe. In the same way, they could register with Linux User Counter.

To better bring the Linux community together, a Web server can be configured so that each user has a home page of information about themselves. A tree of Linux information can be developed, right down to individuals. New Linux users would register themselves, and their home page, with some local server. Every new local server would register itself with a central server. Now the location and interests of each Linux user are easily available.

Lastly, there is potential for information beyond mere man pages. Linux is extraordinary in the quantity and quality of online information available. Because of the contributions of groups like the Linux Documentation Project, information about nearly every aspect of Linux use is available. All of these manuals, information sheets, and FAQs can be made available on the Web.

What makes this possibility so exciting is that each manual can be stored in one place: the author’s home site. So when a document is updated, the author has only one central location to change it. Although the documents would actually be scattered around the world, to the Web user they would all appear on one easy-to-locate Web page. The Web provides an optimal system for both authors and users.

The World Wide Web is still in its infancy. The first half of 1994 saw triple-digit growth in Web traffic. By encompassing older systems of information access, like gopher, the Web guaranteed instant compatibility.

Native Web information is exploding. Through the Internet and through CD-ROM distribution, the Linux community is finding many new and creative uses for this flexible technology. No doubt more and better uses will be forthcoming. It is certain that the phenomenal growth of the Web will continue.

Bernie Thompson ran the PSU Linux WWW during its 3-month life span in early 1994

]]> 0
Throwback: Linux vs. Windows NT and OS/2 Sat, 06 Dec 2014 23:50:27 +0000 Originally Published in Linux Journal Issue #1, March 1994

Twenty years ago, 15 years before founding Plugable Technologies, Bernie Thompson wrote an article for the first issue of Linux Journal. Over the next twenty years, LJ published four more articles, including a 20-year retrospective in 2014. Thanks to Linux Journal’s generous terms allowing authors to keep their copyright, we’re able to republish the full series of articles here. If you have your own memories and opinions of the time, please share in the comments. This is Article 1 of 5.

Picking an operating system is a dangerous business. You’re committing yourself to a couple hours, certainly, or maybe a couple days of manual-reading, file-editing, and hassles. If your real goal was just to get some work done, maybe it would have been simpler to stay with Windows 3.1 and never embark on an adventure in computing.

But, then again, there seems to be a substantial body of computer users who are dissatisfied with DOS and Windows. Some are moving to OS/2, Windows NT, or some other Comdex wonder. Some are even daring enough to wipe out DOS in favor of an anti-establishment system like Linux.

Before you take the plunge, you should know up front what you stand to gain and, more importantly, what you stand to lose. Here’s what lies ahead for you if you want OS/2, Windows NT, or Linux to be part of your future.

Hardware is the First Issue

Don’t even think about switching systems until you know what your hardware supports. The wonderful features of a new system won’t be compelling if your system doesn’t work.

You must have an Intel 386 or better to have any 32-bit choices. Then you need memory. Linux needs 2MB RAM to try out, OS/2 needs 4MB, and NT needs 12MB. And you need disk space. You need to set aside at least 15MB for Linux, 32MB for OS/2, and 70MB for NT for a good trial run. A full working system will require even more resources.


If these requirements are satisfied, you still have to determine if all the pieces of your machine are compatible. If your machine uses the Microchannel bus (all IBM PS/2s), Linux doesn’t support you. If you have a Compaq QVision video board, OS/2 won’t use it. If you have a network card with a 3Com 3c501 chip, NT can’t talk to it. And these are just samples of some possible compatibility problems. The full list changes often. Incompatibilities constantly recede as better hardware support is added. But a constant stream of new, incompatible hardware is always hitting the market.

Why are computer users put through this wringer? Well, the PC hardware market has few solid standards. IBM-compatible hasn’t really meant anything since IBM stopped leading the industry. These nightmares can be avoided by getting your 32-bit operating system the same way you got DOS and Windows: buy a complete computer system with Linux, OS/2 or NT pre-installed. Companies which do this are rare, but you’ll save trouble by seeking one out. Let them find the best hardware to fit the operating system you want.

If buying a whole new system isn’t an option, you’ll have to take the path most Linux, OS/2, and NT users have taken. Just start installing. If you have trouble, be prepared to find out more than you ever wanted to know about the pieces of your system.

Why Operating Systems Matter

Operating systems determine which applications will work, what those applications will look like, and how they will work together.

For example, if you want to run Microsoft’s application suite (Word, Excel, Access, and PowerPoint), you’re out of luck with Linux. They won’t work. With OS/2, they work for now, but the burden is on IBM to keep up since Microsoft abandoned OS/2 in 1991. In the end, Windows 3.1 and Windows NT are the only safe choices for using Microsoft applications.

How applications look and how they work together are determined by the operating system, too.


Windows NT uses the same program manager/file manager/print manager interface as Windows 3.1. This interface is not elegant, but it has one very significant advantage: it is simple. And because it’s not very configurable, users can’t do much damage by moving icons around and changing settings.

OS/2 takes the more radical route of a completely object oriented interface. Data and programs are objects which can be arranged in any manner. Clicking on a data object starts the associated application. Dragging data to the printer object prints it. Although OS/2 has a notoriously bland color scheme and layout when first installed, every detail can be re-configured.

With OS/2’s flexibility comes a daunting depth of detail for first-time users. It is too easy to get lost. With dozens of windows open, it’s a pain to locate and manipulate things. However, these disadvantages fade when the system is used for a while. The detail, power, and regularity of the interface become persuasive.

Linux uses the X Windows system. X Windows is a graphical chameleon, able to look and act many ways. The advantage is flexibility and choice. The disadvantage is complexity. Applications may not look and act alike. Many different interfaces are available. This makes user instruction and support more difficult.

Linux is primarily a command-line system where programs are typed in by name, although program managers and file managers are available to ease the transition of a novice user. The same tasks done with Windows and OS/2 are possible under Linux, but they generally require more knowledge and skills. If one knowledgeable user configures the Linux system, most novice users will be comfortable starting and running applications.


All three systems have a wide variety of books and tutorials available which can help novice users. Although Linux is a free system, it still has a library of books written about it—any book about Unix will apply to Linux. So finding assistance on the use of these systems should not be difficult.

If the issues of interface are surmountable, Linux has many positive characteristics that are not shared by OS/2 and NT. Linux enjoys the advantage of having no guarded secrets, no technology owned by a single company. The source code is freely available, which means it can be inspected and improved upon by any corporate or individual user. And, surprisingly, this common knowledge was used to build a system which is more miserly with memory and disk space than either OS/2 or NT. IBM and Microsoft would actually have much to learn from Linux if they cared to look.

The Foundations

When it comes down to it, an operating system is just a foundation. Choose the foundation that supports the features you need and will need in the future. But be aware of the high price in memory, storage, and performance that these features exact.

Linux, like OS/2, is designed and optimized to run on Intel 386 and compatible CPUs. By contrast, Windows NT is designed to be ported to many different CPUs. NT is currently available for MIPS, DEC Alpha, and Intel 386. This independence from Intel is an important advantage for NT, because users have more hardware choices.

All three systems support multitasking, which is the ability to have many programs running simultaneously. For example, it is possible to format a disk, download a file from a BBS, and edit in a word processor, all simultaneously. You can’t do this using a system like MS DOS, which doesn’t support multitasking.

NT supports multiprocessing, which means using more than one CPU in a single machine. An NT PC could have 2 or more processors, all working together. Again, this means more hardware possibilities for the NT user.

NT and Linux both support dynamic caching. Caching stores recently used information in memory, so it is readily available if needed again. OS/2 sets aside a pre-determined chunk of memory to do this (typically 512K to 2MB), whereas Linux and NT will dynamically use as much spare memory as possible. The result is much faster disk access for Linux and NT, because the information is often already in the cache. OS/2’s inflexibility causes memory to be wasted when not used, and memory to be used poorly when it is scarce.
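
The caching idea is easy to illustrate with a small sketch — a generic least-recently-used cache in Python. This is a conceptual analogy only, not how the Linux or NT kernel code is actually written:

```python
from collections import OrderedDict

# Recently used items are kept in memory; when space runs low,
# the least recently used item is evicted. Kernels cache disk
# blocks this way, growing and shrinking the cache dynamically.
class TinyCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None              # a cache miss: would hit the disk
        self.items.move_to_end(key)  # mark as recently used
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = TinyCache(capacity=2)
cache.put("block1", "data1")
cache.put("block2", "data2")
cache.get("block1")              # touch block1 so it stays warm
cache.put("block3", "data3")     # evicts block2, the coldest entry
print(cache.get("block2"))       # None: it was evicted
print(cache.get("block1"))       # "data1": still in memory
```

The dynamic part is what OS/2 lacks: Linux and NT let the cache balloon into whatever memory is spare, rather than fencing off a fixed chunk.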

Linux, unlike OS/2 and NT, has full multiuser support. Local users, modem users, and network users can all simultaneously run text and graphics programs. This is a powerful feature for business environments that is unmatched by OS/2 or NT.

Linux has security systems to prevent normal users from misconfiguring the system. Although Windows NT isn’t multiuser, it has security checks for the individual using the machine. It is safe to have a Linux or NT machine available for use by many people, whereas an OS/2 user could (mis)configure the system software.


Linux’s security and multiuser features are so well developed because they are traditional features for Unix. Since Linux is “Unix-compatible,” it supports these same powerful features.

The Costs

Every feature supported will tend to make an operating system larger, consuming more memory and storage. Larger systems are also slower than smaller systems when memory is scarce. So the size of a system is an important issue.

NT is the largest of the three systems. NT’s support for portability, multiprocessing, and many other features is the cause of its large size. Given a powerful enough machine, NT offers a set of features that is very compelling.

Linux with X Windows is the next smaller system. Linux itself is very miserly, but X Windows puts a burden on the system. For most users, the graphical interface will be worth the cost in resources.

OS/2 is the smallest of the three when a graphical interface is used. This is the attraction of OS/2. A user need only upgrade to 8MB of RAM to use an object-oriented interface and have a good platform for multitasking DOS, Windows, and OS/2 programs. OS/2 is the strongest of the three for backward compatibility with DOS and Windows. OS/2 has sold several million copies in the last two years, primarily because of these strengths.

Linux without X Windows is the smallest of the three. Running without graphical windows is a great sacrifice for many. But by jettisoning expensive graphics, the system is smaller and faster than OS/2 or NT will ever be. 4MB RAM, the standard configuration for a DOS/Windows PC, is plenty for most tasks. So Linux can make good use of a low-end 386 PC with little memory, where OS/2 or NT either would not run, or not run well. Systems with lots of memory will be able to use Linux’s dynamic caching to achieve unusually high performance. With 16MB RAM, almost 12MB remains to be used for caching and running applications.


In general, the issue of size is a great strength for Linux. Linux was designed to be as small and efficient as possible. NT’s most important criterion was portability, and OS/2’s was backward compatibility. The result is Linux is the most efficient of the three. And because a company or individual has access to the Linux code, it can be optimized and scaled to suit the hardware and needs of the user. OS/2 and NT do not have this flexibility.

The Practical Results

Windows NT is compelling because it is a solid system that offers freedom from the single CPU Intel world.

OS/2 is compelling because it offers the best system for running 16-bit DOS and Windows applications while moving into the more flexible and powerful 32-bit world.

But both systems still end up locking users into proprietary technology—applications that will only work on either OS/2 or NT. Linux does not pose this danger. Applications written for Linux can be ported to any of the dozens of other Unix systems available. Betting on an “open” technology from IBM or Microsoft is still a risky game. Linux offers freedom from this kind of entrapment.

The greatest difficulty in realizing this freedom is finding high quality applications. To keep from getting locked into a proprietary system, you have to choose applications with support for multiple platforms. If your spreadsheet supports Windows, OS/2, Unix, and Mac, you can be confident that support for additional platforms would also be possible. The trade-off is fewer features and higher prices.

Linux has an interface to run commercial applications designed for other Intel Unix systems like SCO Unix. But the quality of applications is still a problem. For example, there is no commercial word processor for Linux which matches the quality of ones for Windows and OS/2. This kind of glaring inadequacy alone can preclude the use of Linux.

Which System to Use

For the corporate user, Linux will fit in well with a TCP/IP based client-server strategy. Linux can turn low-end hardware into a solid fileserver or PostScript print server. Linux works better than many commercial Unix systems on common Intel hardware. Linux is small and fast. Linux can be completely inspected and customized by anyone. Linux has built-in mail and internet tools. Phone support and documentation for Linux are available.

But there are three disadvantages. One, there are few commercial applications. Two, if something goes wrong, there is no one organization to blame as with OS/2 or NT. Three, Linux’s foundations are strong, but Microsoft and IBM are constantly developing new technologies that may leave Linux behind. In general, Linux has the features to make it a better choice than NT or OS/2 in some situations. As Linux gains exposure, more businesses are likely to take advantage of this potential.

For the technical user, Linux offers the exciting chance to tinker with an operating system. All of the system’s source code is available. It is a great learning tool and motivator. And since most current Linux users are technical hobbyists, a wealth of applications are available to suit these tastes. Ray tracers, morphing programs, graphics viewers, compilers, games, and more are all available. Linux does lack full-motion video, speech recognition, and some other cutting-edge technologies. These features, along with OS/2 and NT application development, may be compelling enough to draw the technical user towards OS/2 or NT.

For the novice user, OS/2 or NT is the best 32-bit option. OS/2’s object-oriented interface and free technical support are compelling factors. NT’s power to sway commercial developers is reassuring. But the safest and most likely choice for the novice user is to stick with the operating system that came with their computer, typically DOS and Windows 3.1. Tackling installation, configuration, and new applications is still not trivial for these three 32-bit systems.

Overall, Linux stacks up surprisingly well for a free system developed by a horde of volunteer programmers. Its foundations are solid. The quantity and quality of many free applications are stunning. If Windows-class applications and an OS/2-class interface are developed for Linux, it will have the compelling features to tackle commercial systems. While many computer users now know only OS/2 and NT, thousands of others have discovered Linux. As all three of these systems quickly improve and evolve, Linux is likely to gain an expanding base of users. Free software has a powerful new platform to build on.

Bernie Thompson was a member of IBM’s development team for OS/2 2.0 and 2.1

]]> 0
How to manually update your Windows Intel HD Graphics drivers Thu, 04 Dec 2014 03:51:11 +0000 Fixing graphics driver issues sometimes requires a newer driver than is available from the system manufacturer or Windows Update. In these cases you might need the latest driver direct from the graphics chip maker (usually Intel, AMD, or Nvidia).

This post details the necessary steps to manually update your Intel HD Graphics drivers using Intel reference drivers on Windows 8/8.1 systems. There is also a video of the process embedded below that might be helpful to watch prior to performing the installation steps.

Please note that this process will not work on all systems. The installation process and drivers are just for Intel graphics and have been tested repeatedly on the Microsoft Surface series of tablets with positive results, but only minimally tested on other systems. When in doubt, contact your system manufacturer directly for guidance on driver updates.

Driver Installation Steps

  1. Download the driver package
  2. Locate the downloaded file, right-click it, and extract the contents to a folder of your choosing. Take note of this location. You’ll need it below
  3. Right-click on the Start Menu/Windows logo and select “Device Manager”
  4. In Device Manager, expand the Display Adapters category and then double-click on the entry that appears below it (the Intel HD Graphics Family)
  5. Click the Driver tab -> Update Driver -> Browse my computer for driver software
  6. Select “Let me pick from a list of device drivers on my computer” -> “Have Disk”
  7. Click “Browse” and navigate to the folder where you extracted the download above, followed by the “Graphics” sub-folder -> Click to highlight the file titled “kit64ics.inf” and then click “Open”, followed by “OK” and “Next”
  8. Upon completion of the installation, reboot even if not prompted to. On reboot, you’re done.

Driver Installation Video

Click here for a full screen tab.

We hope this background helps. Any questions? Feel free to comment below. And if you have a Plugable product, just email us and we’ll be happy to help. Thank you!

]]> 29
Plugable Launches the Pro8 Docking Station (UD-PRO8) for Tablets like the Dell Venue 8 Pro Thu, 20 Nov 2014 23:29:22 +0000 Update: November 25th, 2014 – We’ve confirmed the HP Stream 7 is compatible with the Pro8, and we have had reports from customers that the HP Stream 8, Toshiba Encore Mini, and ASUS VivoTab Smart 10.1 may be compatible as well. For a list of compatible tablets click here.

One year ago we demonstrated our UD-3900 USB 3.0 universal docking station with the newly released Dell Venue 8 Pro 8″ Windows tablet on YouTube. Our video has been a huge success – to date we’ve had over 225,000 views, nearly 1,000 likes, almost 500 comments, and were even retweeted by Michael Dell. The overwhelmingly positive response from our audience prompted us to tackle the unsolved problem of simultaneously charging a tablet and using a USB docking station over its one available Micro-B USB port.

Those of you who have been following this project are likely aware of the Kickstarter we launched in June and successfully funded in July. With the help of crowd funding we were able to bring the UD-PRO8 to life.


The Pro8 is the world’s first all-in-one docking solution designed to charge the Dell Venue 8 Pro, Nextbook 8 (Win 8.1), and Lenovo Miix 2 8″ tablets while simultaneously connecting external USB devices. The Pro8 unlocks an amazing amount of potential on compatible tablets like these. Microsoft heavily discounts or gives away full versions of Windows 8.1 only with 8″ or smaller tablets, with the theory that such a small screen and few ports is limiting enough that these systems won’t compete with full Windows PCs. Our Pro8 docking station is a game-changing device that allows these small 8″ tablets to function as desktop replacements with the multi-monitor, network, audio, and USB connectivity options of a full desktop PC.

We have completed the first wave of shipping Pro8 docks to our Kickstarter backers and are excited to finally announce availability on Amazon for $89 in the US and $99 in Canada. The timing couldn’t have worked out better with the holiday season fast approaching. Those with friends and family looking for an inexpensive tablet may want to consider combining our Pro8 with the newly released Nextbook 8, thanks to its aggressive pricing of $149 exclusively from Walmart with Windows 8.1 and a one-year free subscription to Office 365 Personal. Through December 31st they are also including a 16GB Micro SD card for free (ours shipped with a Kingston Class 4).

Better yet, beginning on Thursday November 27th at 8PM (local time) they are offering the tablet for just $99 as an early Black Friday deal. Combined with our Pro8 docking station, for about $200 you can have a good entry level tablet & desktop replacement.

We would like to point out, however, that the Nextbook 8 is on a lower performance tier than the Venue 8 Pro and Lenovo Miix 2 8″, with only 1GB of RAM and a 16GB SSD. Unlike the Venue or Miix, though, it has a built-in mini HDMI port, which is a nice addition.

For those already using and excited about the Pro8, we’d encourage you to spread the word on Twitter with any pics and other info you want to share, using the hashtag #pro8

And if you have any questions or problems at all, let us know.  Our entire Plugable Technologies team is located in Seattle, WA and we’re here to help! If needed, contact our support team here.

Where to Buy

]]> 28
How to switch to USB Audio on Raspberry Pi (Model B/Raspbian September 2014) Thu, 06 Nov 2014 22:17:06 +0000 Serious beats for the Pi!

After publishing our updated Raspberry Pi hardware list, some elaboration is needed on how to use the USB-AUDIO adapter as your default playback device.

This little how-to should enable you to use the USB-AUDIO as the default playback device.

  • To get started, plug your USB-AUDIO into the Pi and run the following command:
    aplay -l

    Within the output you should find: card 0: Device [USB Audio Device], device 0: USB Audio [USB Audio], which means the Pi has recognized the USB-AUDIO adapter and we can move on to configuration. If this is not the case, some further troubleshooting is needed (try power cycling your USB hub or plug the audio adapter directly into the Pi and alternatively use the lsusb command).

  • Use your favorite editor to modify /etc/modprobe.d/alsa-base.conf (be sure to make a backup of this file before editing in case something goes wrong!)
  • Change:
    options snd-usb-audio index=-2

    to:

    options snd-usb-audio index=0

    and also add the following on the next line:

    options snd_bcm2835 index=1

    This rearranges the hierarchy of the sound modules: the USB sound module (snd-usb-audio) becomes card 0 (the default playback device), while the built-in sound module (snd_bcm2835) moves down to card 1.

  • Reboot and test for audio output

As the distros get updated and change over the months, this tutorial might not be an exact 1:1 representation of what your .conf may look like, or of how the audio adapter enumerates. The main point is to set the audio adapter's index to 0. The .conf itself contains the comment: “Keep usb-audio from being loaded as first soundcard”. That comment marks the option you are looking to set to 0.
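For reference, the edit above can be scripted. This is a minimal sketch that applies the change to a local sample copy rather than the live file; on a real Pi you would point it at /etc/modprobe.d/alsa-base.conf (after making a backup) and it assumes GNU sed, which Raspbian ships:

```shell
# Create a sample file containing the line we want to change.
printf 'options snd-usb-audio index=-2\n' > alsa-base.conf.sample

# Keep a backup before editing, as advised above.
cp alsa-base.conf.sample alsa-base.conf.sample.bak

# Replace the usb-audio line and add the snd_bcm2835 line after it.
sed -i 's/^options snd-usb-audio index=-2$/options snd-usb-audio index=0\noptions snd_bcm2835 index=1/' alsa-base.conf.sample

# Show the result.
cat alsa-base.conf.sample
```

After running this, the sample file contains the two lines described in the steps above: snd-usb-audio at index 0 and snd_bcm2835 at index 1.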

OS X 10.10 “Yosemite” Ethernet Adapter Problems? We can help! Sat, 01 Nov 2014 00:32:27 +0000 10.10 Yosemite, non-working Ethernet adapter

There has been a lot of buzz around upgrading to 10.10 and afterward having network related problems. This post will focus on our USB3-HUB3ME, USB3-E1000, USB2-E1000 and USB2-E100 Ethernet adapters, but we encourage you to apply the concept of this content to troubleshoot other brands or similar network related issues.

First, it is a good idea to check whether you can connect to your network via WiFi. If you cannot connect to any network via either WiFi or the Ethernet adapter, you might want to carefully consider this Apple forum thread, which addresses that problem. If the problem is isolated to your Ethernet adapter only, the following instructions are for you.

For OS X/BSD/Unix/Linux it is best practice to remove non-core kernel modules/drivers/extensions before performing a major upgrade and to reinstall the latest revision afterward. We will take a similar approach to fix this issue.

Again, these instructions are for a seemingly non-working Ethernet adapter (USB3-HUB3ME, USB3-E1000, USB2-E1000, USB2-E100) after upgrading from 10.9 “Mavericks” to 10.10 “Yosemite”.

  1. Disconnect Ethernet adapter
  2. Take a look at “System Information” > “Software” > “Extensions” and look for an instance labeled “AX88179_178A” (for the USB3-E1000 and USB3-HUB3ME), “AX88178” (for the USB2-E1000) or “AX88772” (for the USB2-E100), then select it so you can see the “Location” path (as an example, the AX88179_178A instance has the following path: /Library/Extensions/AX88179_178A.kext)
  3. Open your Terminal and run the following command:
    sudo kextunload /pathof/thextension/NAME_OF_THE_KEXT_FILE.kext

    (note this is the path shown in system information in step 2). Now you have unloaded this extension.

  4. Reboot
  5. Download the newest driver from here and install
  6. Connect the Ethernet adapter and test
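The unload command in step 3 can be assembled directly from the “Location” path noted in step 2. A minimal sketch, using the AX88179_178A example path from step 2 (substitute the path shown on your own system); the command is echoed rather than executed so you can review it before running it with sudo:

```shell
# Build the kextunload command from the Location path found in
# System Information. The path below is the example from step 2.
KEXT_PATH="/Library/Extensions/AX88179_178A.kext"
echo "sudo kextunload ${KEXT_PATH}"
```

Running the echoed command in Terminal then unloads the extension, as in step 3.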
Raspberry Pi (Model B) and Plugable Devices Updated for Winter 2014 Thu, 23 Oct 2014 21:39:54 +0000 Plugable Pi mhmmmm!

Since our last post on which devices work best on the Raspberry Pi, we have had some new additions to the Plugable product lineup. This post covers the new products as well as the proven ones, so all the information is on one page.

All tests were carried out on a Raspberry Pi Model B using the latest version of Raspbian Wheezy (September 2014 release).

USB Hubs

  • USB2-HUB7BC – No issues
  • USB2-HUB10C2 – Causes the Raspberry Pi to reboot upon connection, because it supplements the 2.5A wall power with 500mA from the upstream port. This is too much for the Pi right at that moment when it is plugged in. If you plug the 10 port hub in when the Pi is powered down, you can boot into the Pi and all will be well. But since there are better options, we do not recommend our 10 port hub with the Pi.
  • USB2-HUB-AG7 – No issues
  • USB2-HUB4BC – No issues
  • USB2-HUB10S – Causes the Raspberry Pi to reboot upon connection, because it supplements the 2.5A wall power with 500mA from the upstream port. This is too much for the Pi right at that moment when it is plugged in. If you plug the 10 port hub in when the Pi is powered down, you can boot into the Pi and all will be well. But since there are better options, we do not recommend our 10 port hub with the Pi.
  • USB2-2PORT – Causes the Raspberry Pi to reboot upon connection. This is simply because this is an unpowered hub. Only hubs with their own power adapter should be used with the Pi.
  • USB2-SWITCH2 – No issues
  • USB3-HUB10C2 – Produced inconsistent results. Performs best if powered by one of the flip-up ports. Pi reboots upon connecting this Hub to the USB ports. We really do not recommend using this USB hub.
  • USB3-HUB3ME – Causes the Raspberry Pi to reboot upon connection. Plugging the USB hub into the Pi while powered down is advised. USB HID devices (Mice, Keyboards) are known not to work with this hub on the Raspberry Pi.
  • USB3-HUB4M – Causes the Raspberry Pi to reboot upon connection. Plugging the USB hub into the Pi while powered down is advised. USB HID devices (Mice, Keyboards) are known not to work with this hub on the Raspberry Pi.
  • USB3-HUB7-81x – USB HID devices (Mice, Keyboards) are known not to work with this hub on the Raspberry Pi.
  • USB3-HUB81x4 – USB HID devices (Mice, Keyboards) are known not to work with this hub on the Raspberry Pi.
  • USB3-SWITCH2 – No issues

Other Devices

The common pattern with all of the devices below is that you must connect them through one of the powered USB hubs above. If you don’t, the Pi won’t be able to handle the power draw, and it will drop voltage and reset.



  • USB2-E1000 – Driver already in kernel. Works automatically when connected through a powered USB Hub.
  • USB2-E100 – Driver already in kernel. Works automatically when connected through a powered USB Hub.
  • USB3-E1000 – Driver already in kernel. Works automatically when connected through a powered USB Hub.



  • USB3-SATA-U3 – Driver already in kernel. Because it has its own 12V 2A AC adapter, it works automatically even when directly connected to the Pi. Important note: September’s Raspberry Pi release runs on kernel version 3.12.28. There is a known bug with kernel versions 3.15 and 3.16 in combination with this hard drive docking station; full functionality resumed in 3.17.
  • USB3-SATA-UASP1 – No issues
  • USB2-CARDRAM3 – Driver already in kernel. Works automatically when connected through a powered USB Hub.


  • USB2-MICRO-200X – We have tested our microscope, connected through a powered USB hub, to work with GTK+ UVC Viewer. Install and launch it with the following terminal commands:
    sudo apt-get install guvcview
    guvcview
To 10.10 Yosemite, or not to 10.10 Yosemite, that is the Question: Wed, 22 Oct 2014 21:23:12 +0000 OS X Yosemite 10.10

There certainly has been a lot of speculation about how Mac OS X 10.10 Yosemite will change the world and create an even better customer experience, but what does this mean for USB devices and, more specifically, USB devices with drivers?

With the bittersweet aftertaste of the Mavericks upgrade, Mac customers are yet again worrying about hardware compatibility. Since the leap from 10.8 Mountain Lion to 10.9 Mavericks seemed to be a large one, will there be the same anguish of losing full functionality of current accessory hardware?

Although we are still in the infant stages of the 10.10 release, and these problems encompass not just Plugable products, we would still like to share a few data points on what has been experienced so far.

Ethernet Adapters

There is 10.10 driver support for the USB3-E1000, USB2-E1000, USB2-E100 and the USB3-HUB3ME. If customers lose functionality after upgrading, a re-installation of the driver will help get back up and running again. This entails removing the current driver (a removal script is included in the driver folder) and removing its instance from the “Network” lineup in “System Preferences”. If you have a “Bluetooth PAN” entry or have connected to the internet via Bluetooth before, removing this entry will help simplify your troubleshooting process.

We have also seen other underlying issues that contribute to the symptoms of a non-working Ethernet adapter (see this current Apple Forum thread). In these cases there are either file permission issues, or the operating system fails to launch network-related daemons/services. To address the former, we recommend running the “Repair Disk Permissions” utility, which you can find within “Disk Utility” on your Macintosh HD. For a detailed tutorial please see this blog post.

DisplayLink USB Graphics Adapters and Docking Stations

The primary challenges relating to USB graphics that were present in Mavericks (10.9.x) are still problematic in Yosemite. The following key issues can be especially problematic (though their severity and frequency can vary from system to system):

  • Second connected DisplayLink display may not display an image.
  • Display arrangement does not persist after rebooting when using two or more DisplayLink displays.
  • Some users experience higher than expected CPU usage when a DisplayLink display is connected.
  • Users can encounter intermittent spontaneous instances of being logged out of their account. (This is caused by Apple’s WindowServer process crashing. We’ve documented a fix that helps this behavior in Mavericks here but have not yet successfully reproduced this issue in 10.10 to see if the same fix is successful.)

There is a more comprehensive list of known issues relating to Yosemite on DisplayLink’s Knowledgebase. DisplayLink is still attempting to engage with Apple regarding these issues, and we’ll continue to post updates as they become available.

One positive development of note is that DisplayLink has recently released an updated beta driver (v2.3) for OS X 10.8.5, 10.9.x, and 10.10. It contains some minor bug fixes and adds support for 4K resolutions using DL-5x00-based display adapters such as our UGA-4KDP.

USB Hubs

USB 3.0 hub support on Yosemite has not had any noticeable changes. Overall, users should expect a hassle-free experience with any of our VL81x-series chipset hubs and all of their devices. One notable exception is that some external hard drives may need a firmware update from the drive manufacturer. This is nothing new; many Mac users have needed to update their external hard drive firmware for stable operation in older OS X 10.x releases, but we always recommend checking if you run into any issues.

Most drives will not have this issue, but we do see it happen on occasion in edge-case scenarios. Symptoms to look out for are drives failing to resume from sleep properly, resulting in the drive not being ejected (unmounted) cleanly. Because of this, data corruption can occur, and if the drive is being used for Time Machine backups, we recommend making sure the external hard drive is directly connected to the Mac.

What BadUSB Is and Isn’t Tue, 07 Oct 2014 00:25:27 +0000 The BadUSB exploit is an idea and working proof of concept which takes advantage of the fact that some USB devices have firmware, and on some of those devices the firmware can be updated.

BadUSB has exploded onto the press in the last few days with articles like Wired – The Unpatchable Malware That Infects USBs Is Now on the Loose, CNBC – Why USB malware just became a big problem, and The Verge – This published hack could be the beginning of the end for USB.

This first wave of articles has a few problems, as you might guess. As a former Development Manager of the USB team at Microsoft and the founder of a USB device maker (Plugable Technologies), I hope to fill in a few more of the pieces.

First off, this is a real family of security issues. Anywhere there’s running code, there’s opportunity for exploit. In the Internet of Things era, there is code running nearly everywhere. As electronics shrink, things we think of as “devices” are really computers. To deal with an evolving world, we often want these little devices to be software fixable and upgradable. This creates risks that need to be actively mitigated.

To hack a computer with a USB device, at least 2 things have to be true:

  1. The USB device being infected needs to have firmware, that firmware needs to be software upgradable, and that upgrade mechanism needs to be insecure. That is true of some USB devices but not others.
  2. If a USB device is vulnerable, the virus has to be designed for particular USB controller(s) in that device. The method of flashing firmware on the device and the instruction set is controller specific. The BadUSB code out now is specific to one USB flash controller (Phison) and won’t affect other USB devices. It is not a universal attack.

Whether #1 and #2 are true depends on the particular device. Take our Plugable USB product line as an example: none are exploitable with the BadUSB code as it stands right now because we don’t use the Phison controller. However, some would be vulnerable if specific attacks were targeted at the specific controllers in the devices.

For example, the Terminus Technology chipset used in all of our Plugable-brand USB 2.0 hubs is a fixed-function hardware ASIC without executable or updatable firmware. These USB devices are not vulnerable to BadUSB-style attacks of any kind.

On the other hand, our USB 3.0 SATA drive docks use the ASMedia 1051E and 1053E chipsets, which have an 8-bit microcontroller with firmware that is software-upgradable. So while the recently released BadUSB code will not infect these docks, in theory they could be targeted in the future with an effort similar to the one that went into BadUSB.

An interesting third example is our Plugable USB 3.0 Tablet / Laptop Docking Stations and Graphics Adapters. These use DisplayLink DL-3x00 and DL-5x00 chipsets, which make use of software-upgradable firmware. However, DisplayLink has implemented on-chip authentication, encryption, and firmware validation, which makes it quite difficult for any third party to successfully update the firmware. To date, no third party has been able to crack this and talk to the DisplayLink chip. That is one of the reasons why these products work only with Windows and Mac, where DisplayLink provides the drivers themselves. No software-based security is invulnerable, but it can be a strong mitigation.

You can find out which USB controllers are used in our products on the product pages at Plugable and on Newegg or Amazon listings, etc. We do that because chipset is the best way to dig into compatibility details, but it’s also the best way to research what security features the chips have. We’ll be working to expand on our security information and features over time.

Hopefully some of this detail helps create a fuller picture of what BadUSB is and isn’t. You can also get a lot of great detail from Brandon Wilson and Adam Caudill’s video of how BadUSB was created. If you have any questions, we’re happy to share what we know, just comment below.

Bernie Thompson
Founder, Plugable Technologies
