The thing I’m sharing today is a very early and very small slice of the project that is likely going to consume the next few years of my life. This is a parametric generator for M.2 expansion card mockups in OpenSCAD.
When looking into small form factor cases to build a Mini-ITX PC, I found a few things:
Like any other hobby, there is an obsessive (in a good way) community of small form factor enthusiasts.
The metric they optimize for is case size in liters.
Often, people are stuck sub-optimally: limiting their component selection to fit the case they want, or their case selection to fit the components they have.
Rather than limiting choice or ending up with a larger-than-desired case, why not make your case exactly match the size of the components you want, with no wasted space? It turns out that a Mini-ITX motherboard, an SFX power supply, and a short GPU just barely fit within the build volume of a Prusa i3 MK3 3D printer, so I decided to solve exactly that with an open source, fully parametric, printable case in OpenSCAD. That means you can input the components you have, or edit a few dimensions, and output a bespoke case that fits them perfectly. To win community brownie points, the volume of the case is also automatically computed and embossed on the side.
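The embossed volume figure is just the external bounding box converted to liters. A minimal sketch of the math in Python (the actual generator is OpenSCAD; the function name and example dimensions here are illustrative assumptions, not taken from the project):

```python
# Illustrative sketch of how the embossed case volume could be derived from
# the generated case's outer dimensions; names and numbers are assumptions.
def case_volume_liters(width_mm: float, depth_mm: float, height_mm: float) -> float:
    """External case volume in liters, the metric SFF builders optimize for."""
    return width_mm * depth_mm * height_mm / 1_000_000

# e.g. a hypothetical case just wrapping a 170x170 mm Mini-ITX board with walls:
print(round(case_volume_liters(176, 176, 300), 2))  # 9.29 (liters)
```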
Partly for rigidity and partly for simplicity of design and assembly, I decided to make it effectively a bucket with most of the case being a single print. I started with a traditional “shoebox” layout to keep it simple as well. The only other parts are the lid and optional feet (printable in flexible material like TPU). I also used threaded inserts rather than screwing into plastic to allow re-assembly without destroying the case.
I referenced the Mini-ITX and PCI-e specs to get the proper dimensions, and measured the components I had on hand and pulled some datasheets online for specifics on heatsinks and the GPU. There is pretty good ventilation all around, with the default configuration that fits my components having a 140mm intake fan and a mostly isolated GPU with dedicated intake and exhaust.
It took me three or four iterations of prints (~36 hours and ~400g/$8 of plastic each) to get to a level of completeness that I’m happy with using and publishing, but there is certainly more to improve. Since it is open source, revisions and fixes are welcome.
I tried to make it as simple as possible to customize by having keyword fields for the power supply type and heatsink chosen. The PSU can be SFX, SFX-L, or FlexATX, and heatsink can be a 120mm AIO, Noctua NH-L12s, Noctua NH-U9s, or Cryorig C7. If you have any of those and the same GPU I have (Zotac 1080 Mini), you can just edit the keywords and the case will be automatically generated to fit them. If you want to make deeper changes or use different components, you can do so by editing the .scad files.
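The keyword approach boils down to a lookup from a component name to the cavity the case needs to reserve. A sketch of the idea in Python, where the envelope sizes (W × H × D, mm) are approximate figures from the respective PSU form factor specs, and the function and clearance names are my own illustrative assumptions:

```python
# Keyword-to-dimensions lookup in the spirit of the case's PSU option;
# sizes are approximate spec envelopes, names are illustrative assumptions.
PSU_SIZES = {
    "SFX":     (125.0, 63.5, 100.0),
    "SFX-L":   (125.0, 63.5, 130.0),
    "FlexATX": (81.5, 40.5, 150.0),
}

def psu_cavity(kind: str, clearance_mm: float = 2.0) -> tuple:
    """Cavity (W, H, D in mm) to reserve in the case for the chosen PSU keyword."""
    w, h, d = PSU_SIZES[kind]
    return (w + clearance_mm, h + clearance_mm, d + clearance_mm)

print(psu_cavity("SFX"))  # (127.0, 65.5, 102.0)
```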
The full CAD, example ready-to-print .stl files, and instructions are up on GitHub, licensed under an Open Source 2-Clause BSD license. You can also follow along the development thread at SFF Forum.
7 Comments on Fully Parametric 3D Printable Computer Case
It’s remarkable how much and how little has changed with RepRap since I built my Mendel in late 2010. The basic architecture has proven incredibly robust. The most popular home 3D printers including the Prusa i3 MK3 that I bought still use an open frame with a moving bed on a belt for Y, moving extruder on a belt for X, dual driven lead screws for Z, gear driven filament into a hot end with a heat break and heat block, a 0.4mm nozzle, and an ATmega2560 for control. I suspect if I dug into the firmware, I’d even find some source in common between the Prusa firmware on the MK3 and the Sprinter firmware I used on the Mendel.
That may sound like criticism, but I actually mean it as praise. Over the last 8 years, there have been hundreds of diverging and converging iterations on the Mendel formula enabled by its open source nature, with each fixing flaws and adding improvements over the last. It took me about two months of research to get the right parts and another two months of building and tuning to get my old Mendel to print anything at all, and it took a stack of hacks and modifications of mechanical design, circuitry, firmware, and host software that meant I was probably the only person who could speak the incantations required to operate the thing. With the MK3, it took 4-5 hours of assembly (by choice; you can order it pre-assembled) and absolutely no configuration to get to a perfect first print, and there are thousands of people with the same configuration.
The printer isn’t perfect, but again open source comes to the rescue. I had taken a few months’ hiatus using a Monoprice Mini Delta 3D Printer, and while it was a nice tool, it had a range of bugs and irritating flaws that were challenging or impossible to correct. With the Prusa, I found that I needed a light to provide illumination for the webcam attached to the OctoPrint Raspberry Pi driving it. I was able to pull up the schematic and rig up an LED strip trivially. I’ve posted up the CAD and instructions on Thingiverse so anyone else with an MK3 or derived printer can do it too!
1 Comment on Revisiting RepRap 8 years later with a Prusa i3 MK3
When building projects professionally, I try to take every shortcut possible to accelerate learnings around an idea and get useful results to inform the next iteration. When building projects personally, I do basically the opposite. Often when starting with an idea, I’ll find that it would be helpful to build a tool to execute on the idea cleanly, so I switch tracks to building the tool. Sometimes, when trying to build that tool, I’ll find that I’m missing some other tool and build that instead. This is one of those.
I couldn’t find an adjustable power supply in my house (I think my personal one became property of Oculus at some point), and I couldn’t find a small simple one that I liked online, so I built the one I wanted. This is just a 3D printed housing with a PD Buddy USB Type C board, a Rui Deng DPS5005 Switching Power Supply, and banana plug terminals inside. It supports around 0-19V output from a 20V Type C power supply like a MacBook Pro charger and up to 5A. Rui Deng’s DPS3005 would technically also be sufficient if you want to save a few bucks. The result is a cute little adjustable desktop power supply that solves for what most of my projects need. The OpenSCAD and STL files along with assembly instructions are on Thingiverse.
1 Comment on Tiny USB Type C Adjustable Power Supply
A few months’ hiatus from this blog turned into five and a half years, but that is a much longer story. This one is about the state of desktop spherical displays in 2018. In 2011, I hacked together the Snow Globe spherical display from a laser pico-projector, an off the shelf fish-eye lens, a bathroom light fixture, and some shader code. I had hoped to make it easy for folks to build their own version by publishing everything, but the lens ended up being unobtanium. Judging by the comments on the post, nobody was able to properly replicate the build.
A few months ago Palmer Luckey gave me a heads up that a company called Gakken in Japan had a consumer version of the idea and that, like everything in the world, there were sellers on eBay and Amazon importing it into the US. The Gakken Worldeye sounded like it could fulfill the dream of a desktop spherical display, so I bought one to use and another to tear down. It ended up being a hemispherical display with a pretty decent projection surface but a terrible projector and even worse driving electronics. The guts of the sphere are above. There is a VGA resolution TI DLP that is cropped by the lens to a 480 pixel circle. The Worldeye takes 720p input over HDMI, which is then downsampled and squashed horizontally to that circle by an MStar video bridge. Between the poor projector resolution and the questionable resampling, the results look extremely blurry.
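A quick back-of-envelope check makes the downsampling loss concrete: squashing a 720p frame into a 480-pixel circle throws away the overwhelming majority of the input pixels before the poor optics even come into play.

```python
import math

# Fraction of a 720p input frame that survives being resampled into the
# 480-pixel circle the lens crops the VGA DLP down to.
input_px = 1280 * 720                  # 921,600 pixels in
circle_px = math.pi * (480 / 2) ** 2   # ~181,000 pixels out
print(round(circle_px / input_px, 2))  # ~0.2, so ~80% of the input is discarded
```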
I figured it would be possible to improve on the sphere by taking advantage of the display surface and lens and swapping out the projector and electronics. In the time since the ShowWX used in the Snow Globe was released, Microvision has developed higher resolution laser scanning projector modules in conjunction with Sony and others. I picked up a Sony MP-CL1 with one of these modules, which is natively 1280×720. This should have been a decent improvement over the 848×480 in the ShowWX. I then CAD’d up and 3d printed a holder to mount it along with the original Worldeye lens into the globe.
The results are a bit underwhelming. The image looks better than the stock Worldeye, but still looks quite blurry. I realized afterwards that the sphere diameter is too small to take advantage of the projector resolution. At around a 5″ diameter, the surface of the sphere is getting around 1.8 pixels per mm (assuming uniform distortion). The laser beam coming out of the projector is well over 1 mm wide, and probably closer to 1.5 mm. This means that neighboring pixels are blending heavily into each other. The lens MTF is probably also pretty poor, which doesn’t help the sharpness issue. If you’re interested in trying this out anyway, the .scad and .stl files are up on Thingiverse and the code for the Science on a Snow Globe application to display equirectangular images and videos is on GitHub. The conclusion to the opening prompt is that spherical displays are more accessible in 2018 than they were in 2011, but don’t seem to be any better quality. Hopefully someone takes the initiative to solve this.
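The pixel-blending claim is easy to verify from the numbers in the paragraph above: a beam wider than the pixel pitch necessarily paints over its neighbors.

```python
# At ~1.8 pixels per mm on the sphere surface, a laser beam 1.0-1.5 mm wide
# spans well over one pixel, so adjacent pixels blend into each other.
pixels_per_mm = 1.8
for beam_mm in (1.0, 1.5):
    print(round(beam_mm * pixels_per_mm, 2))  # 1.8 and 2.7 pixels per beam width
```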
13 Comments on Snow Globe Redux: Gakken Worldeye Projector Upgrade
This is why you won’t hear from me for months
3 Comments on This is why you won’t hear from me for months
No Comments on This is why you haven’t heard from me in months
Snow Globe and the Adjacent Reality Tracker
I’ll have more detailed posts about the Adjacent Reality Tracker in the future, but here is a preview of some of what you can expect to see at our booth in Fiesta Hall at the Bay Area Maker Faire on May 19th-20th this year. In the meantime, you can follow Donnie and me working on the thing at GitHub.
11 Comments on Snow Globe and the Adjacent Reality Tracker
Blinded by the Light: DIY Retinal Projection
After grabbing a couple of Microvision SHOWWX laser picoprojectors when they went up on Woot a few months back, I started looking for ways to use them. Microvision started out of a project at the University of Washington HITLab in 1994 to develop laser based virtual retinal displays. That is, a display that projects an image directly onto the user’s retina. This allows for a potentially very compact see through display that is only visible by the user. The system they developed reflected lasers off of a mechanical resonant scanner to deflect them vertically and horizontally, placing pixels at the right locations to form an image. The lasers were modulated to vary the brightness of the pixels. The SHOWWX is essentially this setup after 15 years of development to make it inexpensive and miniaturize it to pocket size. The rest of the retinal display system was a set of optics designed to reduce the scanned image down to a point at the user’s pupil. I thought I would try to shrink and cheapen that part of it as well.
The setup I built is basically what Michael Tidwell describes in his Virtual Retinal Displays thesis. The projected image passes through a beamsplitter where some of the light is reflected away, reflects off of a spherical concave mirror to reduce back down to a point, and hits the other side of the beamsplitter, where some of the light passes through and the rest is reflected to the user’s pupil along with light passing through the splitter from the outside world. For the sake of cost savings, all of my mirrors are from the bargain bin of Anchor Optics. The key to the project is picking the right size and focal length of the spherical mirror. The larger setup in the picture below uses a 57mm focal length mirror, which results in a fairly large rig with the laser scanner sitting at twice the focal length (the center of curvature) away from the mirror. The smaller setup has a focal length around 27mm, which results in an image that is too close to focus on unless I take my contact lenses out. The mirror also has to be large enough to cover most of the projected image, which means the radius should be at least ~0.4x the focal length for the 24.3° height and at most ~0.8x for the 43.2° width coming from a SHOWWX. Note that this also puts the field of view of the virtual image entering the eye somewhere between a 24.3° diameter circle and a 24.3° by 43.2° rounded rectangle.
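The mirror-sizing rule of thumb above follows directly from the geometry: with the scanner at the center of curvature (2f from the mirror), a projection half-angle of fov/2 sweeps a radius of 2f·tan(fov/2) across the mirror. A short check in Python reproduces the ~0.4x and ~0.8x factors:

```python
import math

# Mirror radius needed (as a multiple of focal length f) to capture a given
# projection field of view, with the scanner at the center of curvature (2f).
def mirror_radius_factor(fov_deg: float) -> float:
    return 2 * math.tan(math.radians(fov_deg / 2))

print(round(mirror_radius_factor(24.3), 2))  # ~0.43 * f for the 24.3 deg height
print(round(mirror_radius_factor(43.2), 2))  # ~0.79 * f for the 43.2 deg width
```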
Aside from my inability to find properly shaped mirrors, the big weakness of this rig is the size of the exit pupil. The exit pupil is basically the useful size of the image leaving the system. In this case, it is the width of the point that hits the user’s pupil. If the point is too small, eye movement will cause the pupil to miss the image entirely. Because the projector is at the center of curvature of the mirror, the exit pupil is the same width as the laser beams coming out of the projector: around 1.5 mm wide. This makes it completely impractical to use head mounted or, really, any other way. I paused work on this project a few months ago with the intention of coming back to it when I could think of a way around this. With usable see through consumer head mounted displays just around the bend though, I figured it was time to abandon the project and publish the mistakes I’ve made in case it helps anyone else.
If you do want to build something like this, keep in mind that the title of this post is only half joking. I don’t normally use bold, but this is extra important: If you don’t significantly reduce the intensity of light coming from the projector, you will damage your eyes, possibly permanently. The HITLab system had a maximum laser power output of around 2 μW. The SHOWWX has a maximum of 200mW, which is 100,000x as much! Some folks at the HITLab published a paper on retinal display safety and determined that the maximum permissible exposure from a long term laser display source is around 150 μW, so I needed to reduce the power by at least 10,000x to have a reasonable safety margin. As you can see in the picture above, I glued a ND1024 neutral density filter over the exit of the projector, which reduces the output to 0.1%. Additionally, the beamsplitter I picked reflects away 10% of the light after it exits the projector, and 90% of what bounces off of the concave mirror. Between the ND filter, the beamsplitter, and setting the projector to its lowest brightness setting, the system should be safe to use. The STL file and a fairly ugly parametric OpenSCAD file for the 3D printed rig to hold it all together are below.
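The attenuation chain described above can be sanity checked with a few lines of arithmetic: the ND1024 filter passes about 1/1024 of the light, and the two passes through the 90/10 beamsplitter together pass only about 9% of what remains.

```python
# Worst-case power reaching the eye, per the attenuation chain in the text:
# ND1024 filter, then two passes through the 90/10 beamsplitter.
projector_uW = 200_000.0   # 200 mW maximum projector output
nd_filter = 1 / 1024       # ND1024 transmits ~0.1%
splitter = 0.9 * 0.1       # ~9% survives both beamsplitter passes
at_eye_uW = projector_uW * nd_filter * splitter
print(round(at_eye_uW, 1))  # ~17.6 uW, under the ~150 uW long-term exposure limit
```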
15 Comments on Blinded by the Light: DIY Retinal Projection
Reverse Engineering the Lytro .LFP File Format
After getting my Lytro camera yesterday, I set about answering the questions about the light field capture format I had from the last time around. Lytro may be focusing (pun absolutely intended) on the Facebook using crowd with their camera and software, but their file format suggests they don’t mind nerds like us poking around. The file structure is the same as what they use for their compressed web display .lfp files, complete with a plain text table of contents, so I was able to re-use the lfpsplitter tool I wrote earlier with some minor modifications. The README with the tool describes in detail the format of the file and how to parse it.
The table of contents in the raw .lfp files gives away most of the camera’s secrets. It contains a bunch of useful metadata and calibration data like the focal length, sensor temperature, exposure length, and zoom length. It also gives away the fact that the camera contains a 3 axis accelerometer, storing the orientation of the camera with respect to gravity in each image. The physical sensor is 3280 by 3280 pixels, and the raw file just contains a BGGR Bayer array of it at 12 bits per pixel. Saving the array and converting it to tif using the raw2tiff command below shows that each microlens is about 10 pixels in diameter with some vignetting on the edges.
raw2tiff -w 3280 -l 3280 -d short IMG_0004_imageRef0.raw output.tif
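Before handing the buffer to raw2tiff, the 12-bit pixels have to be expanded to 16-bit values (two pixels per 3 bytes). A sketch of the unpack in Python; note that the high-nibble-first bit ordering here is my assumption for illustration, and the tool’s README is the authoritative description of the layout:

```python
import numpy as np

def unpack12(packed: bytes, width: int, height: int) -> np.ndarray:
    """Unpack 12-bit pixels (2 pixels per 3 bytes, assumed high-nibble-first)
    into a uint16 array; pass width=height=3280 for the Lytro sensor."""
    b = np.frombuffer(packed, dtype=np.uint8).reshape(-1, 3).astype(np.uint16)
    p0 = (b[:, 0] << 4) | (b[:, 1] >> 4)          # first pixel: 8 high + 4 low bits
    p1 = ((b[:, 1] & 0x0F) << 8) | b[:, 2]        # second pixel: 4 high + 8 low bits
    return np.column_stack((p0, p1)).reshape(height, width)

# Two 12-bit values 0xABC and 0xDEF packed into three bytes:
print(unpack12(bytes([0xAB, 0xCD, 0xEF]), 2, 1))  # [[2748 3567]]
```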
Syncing the camera to Lytro’s desktop software backs it up the first time. Amazingly, the backup file uses the same structure as both .lfp file types. The file contains a huge amount of factory calibration data like an array of hot or stuck pixels and color calibration under different lighting conditions. Incredibly, it also lets loose that there is functioning Wi-Fi on board the camera, with files named “C:\\CALIB\\WIFI_PING_RESULT.TXT” and “C:\\CALIB\\WIFI_MAC_ADDR.TXT”, which matches what the FCC teardowns show. There is no mention of Bluetooth support though, despite the chipset supporting it. In any case, it seems there is a lot of cool stuff coming via firmware updates.
Hopefully one of those updates enables a USB Mass Storage mode, as there does not appear to be any way to get files off of the camera in Linux. I had to borrow my roommate’s MacBook Air for this escapade. The camera shows up as a SCSI CD drive, but mounting /dev/sr0 only shows a placeholder message intended for Windows users.
Thank you for purchasing your Lytro camera. Unfortunately, we do not have a
Windows version of our desktop application at this time. Please check out
http://support.lytro.com for the latest info on Windows support.
It was pretty trivial to write the lfpsplitter to get the raw data shown above, but doing anything useful with it will take more effort. Normally simple stuff like demosaicing the Bayer array will likely be complicated by the need to avoid the gaps between microlenses and not distort the ray direction information. Getting high quality results will probably also require applying the calibration information from the camera backups. A first party light field editing library would be wonderful, but Lytro probably has other priorities.
You can grab my lfpsplitter tool from GitHub at git://github.com/nrpatel/lfptools.git and I uploaded an example .lfp you can use with it if you want to play with light field captures without the $400 hardware commitment.
69 Comments on Reverse Engineering the Lytro .LFP File Format