Friday, May 20, 2011

USB Support

This camera project is going to have USB support, but not the sort of USB port that your personal camera has. As discussed earlier, the camera processor module will communicate with the host computer via WiFi, so there is no need for USB there. But a device-side port like that isn't what I'm after anyhow. What I want is to hang USB peripherals off the processor module.

Motivations

There are two specific uses I have in mind for USB support. The first is to control filter wheels. There are a variety of store-bought filter wheels available, and most (if not all) can be controlled by USB. These are typically controlled by the host computer, but that host will be some distance away from the telescope, so they really have to connect to the processor module via a USB type-A connector.

I would also like to be able to attach a USB camera to the processor module. Why? Because I would like to build and test the processor board before the sensor board, and it would be very handy to control store-bought USB cameras during testing. I'm thinking specifically of my Nikon D80 camera, although there are other USB astronomy cameras as well. Also, the former might be great for taking context shots while the main telescope is shooting, and the latter may be useful as an auto-guider. In either case, the extra camera is the USB device, and the processor will need another USB type-A connector.

Implementation

So I need two USB ports, and both need to at least support being USB hosts. It turns out that the Overo board supports one USB 2.0 host port and one USB OTG port. The OTG (USB On The Go) port can choose to be a host when connected to peripherals, so that will work out fine. Also, the power management chip on the Overo CPU board already includes all the power management needed for the OTG port, so there is little for me to do there other than provide a socket and a few simple discrete resistors and capacitors.

The USB 2.0 port is a little more complex. The power management chip doesn't help with this port, but the Overo CPU board does include an SMSC PHY to handle most of the USB protocol. It still requires an external power management chip, and so I've placed a TPS61202 to manage power. The USB PHY provides the signals to select the power mode, so this won't be a software issue.

And finally, there are sockets. The USB 2.0 port gets a mini-A socket, and the USB OTG gets a mini-AB. This, by the way, preserves the option of having this device act as a USB target. I can't imagine why I would want to do that, but it's literally free.

The schematic sheet for the USB interface is processor_usb.sch, with a matching symbol processor_usb-1.sym.

Monday, May 16, 2011

Getting Back to it, and Headway on the LCD

It's been quite a while since I've been able to put any real effort into this project. I've been doing a lot of Icarus Verilog work, and only just barely keeping afloat with day-job work. But enough with my excuses. I've pushed a bunch of schematic changes into the git repository. The two major changes are that I'm dropping the Ethernet interface and I've cleaned up the LCD interface.

Ethernet Is Out

This important change is a bit of scope reduction. Early on I thought to put every capability of the Gumstix board to use, whether it made sense or not. Some such capabilities include the myriad networking options. I specifically want to support WiFi in order to eliminate cabling between the telescope and the computer, but I was planning on including ethernet support as a high performance extra networking capability. The Pros:
  • Ethernet/TCP/IP is easy to set up,
  • It is potentially faster, and
  • It is reliable and free of interference.
But the Cons are:
  • The external chips/magnetics take up board space,
  • Ethernet support is a useless power draw when not in use, and
  • it is not as trivial to implement as I had hoped.
After reflection, the costs in complexity, design effort and power consumption together seem to far outweigh any potential benefits; and honestly, the benefits do not really apply to this project. I can think of no instance where I would honestly rather use Ethernet over WiFi. So Ethernet support is out, and I've modified the processor_main.sch schematic to reflect that. (The processor_lan.sch sheet is still in git, but no longer referenced. I'll remove it eventually.)

LCD Progress

But I've also made positive progress on another front. With the processor_lan sheet out of the way, I've managed to focus some attention on the LCD display/touch screen support, and that can be found in the processor_lcd.sch schematic sheet. This sheet is both simple and tricky. It is simple because the OMAP processor includes built-in LCD panel support, so it is mostly a matter of hooking up the video signals from the processor to the LCD connector. There is not a whole lot of logical thought needed here.

But this is where it gets tricky: the processor signals are all 1.8V, whereas the LCD signals are at least 2.5V. Therefore, a bus transceiver is needed. I've chosen a TI SN74LVCH16T245 part to do the translation. I set it up so that the side connected to the CPU outputs is 1.8V, and the side connected to the LCD inputs is 3.3V. The transceiver chips take care of the rest. I need two transceivers of 16 bits each to handle all the signals. (A few bits are left over. Should I tie them high or let them float?)

The touch screen controller is a little more accommodating. The TI TSC2046 has flexible power options: I power it mostly from the 3.3V that otherwise powers the LCD, but give it a digital I/O power supply of 1.8V. Thus, the digital interface (mostly SPI) is directly compatible with the signals from the processor. The datasheet for the TSC2046 is a little vague about the touch interrupt handling, so I've taken my lead from the Palo43 schematics and added a weak pullup. Hopefully this will prevent spurious interrupts from the touch screen.
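
While I'm on the subject, the SPI protocol for this part is simple enough to show. Here is a minimal userspace sketch of a raw position read, using the Linux spidev interface and the standard ADS7846-family control bytes (the TSC2046 is a member of that family). The device path is an assumption for illustration; in the finished system the kernel's touch screen driver would own the chip.

/* tsc2046_read.c -- minimal sketch: read raw touch positions from
   the TSC2046 through the Linux spidev interface. The device path
   is an assumption; normally the kernel driver owns the chip. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/spi/spidev.h>

/* ADS7846-family control bytes: start bit, channel select,
   12-bit differential mode, power down between conversions. */
#define TSC_READ_X 0xD0
#define TSC_READ_Y 0x90

static int tsc_read(int fd, uint8_t cmd)
{
      uint8_t tx[3] = { cmd, 0, 0 };
      uint8_t rx[3] = { 0, 0, 0 };
      struct spi_ioc_transfer xfer;

      memset(&xfer, 0, sizeof xfer);
      xfer.tx_buf = (unsigned long) tx;
      xfer.rx_buf = (unsigned long) rx;
      xfer.len = 3;
      xfer.speed_hz = 2000000; /* comfortably inside the part's rating */
      xfer.bits_per_word = 8;

      if (ioctl(fd, SPI_IOC_MESSAGE(1), &xfer) < 0)
            return -1;

      /* The 12 result bits arrive MSB first, one busy clock after
         the control byte. */
      return ((rx[1] & 0x7F) << 5) | (rx[2] >> 3);
}

int main(void)
{
      int fd = open("/dev/spidev1.0", O_RDWR); /* assumed device path */
      if (fd < 0)
            return 1;
      printf("x=%d y=%d\n", tsc_read(fd, TSC_READ_X),
             tsc_read(fd, TSC_READ_Y));
      close(fd);
      return 0;
}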

The LCD screen is illuminated by a built-in white LED back-light. I'm thinking of having a dial or slider that the user can use to control the brightness. I've thought about software controlled brightness, but frankly I would rather not have the brightness be part of a user interface on the screen. A manual brightness control is simply a variable resistor in series with a limiting resistor. I think simple is best here. Choose the limiting resistor so that maximum brightness (zero variable resistance) gives me roughly 20mA at 3.3V. I'm not yet sure what the range of the variable resistor should be, but I will want it to go pretty dim for night-time use. It may be a matter of trial and error.
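
Just to put rough numbers on the limiting resistor, assume the back-light drops about 3.0V at full brightness (a placeholder figure; the real drop is whatever the LB043WQ2 datasheet says). Then:

  R_limit = (V_supply - V_led) / I_max = (3.3V - 3.0V) / 20mA = 15 ohms

and with the variable resistor R_var in series, the current falls to roughly:

  I = (3.3V - 3.0V) / (15 ohms + R_var)

so a 1k pot would take the back-light down to around 0.3mA at the dim end. The LED drop isn't really constant with current, so treat this as a first-order guess for picking part values.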

I've tentatively settled on the LG 4.3 inch LCD touch screen (LG part number LB043WQ2). It's not a huge display area (105mm x 67mm), but it should be plenty large enough for a rather fancy hand-held controller; it's about the size of a portable automotive GPS. This screen is also known to work with the Palo43 board from Gumstix, so I'll plan on getting a Palo43 and the LG 4.3 display from Gumstix as a means to do early software development. Since most of my LCD interface is inspired by the Palo43, it should be software compatible.

How I've Been Cheating

I've mentioned the Gumstix Palo43 a couple of times, so it should be pretty clear by now that I have in fact been using that board as a crib for a lot of these design choices. This is not just a lazy cheat, but a purposeful one. By making the LCD interface (including the display and touch screen) functionally similar to the existing Palo43 board, I've assured myself that I will have a development platform that I can use to prototype a lot of my software. I get a known-working set of similar hardware that I can reference when my custom designs go awry, and I get a collection of known working drivers and other software to get me started with the porting work when the real board is ready.

What's Next?

The most obvious next tasks for the processor unit are the USB interface and the stepper motor controllers. The USB support will be pretty easy (although there will be some tricks to support a debug console), but the stepper motor support will take some thought. I'll be borrowing some more USB details from the Palo43 sample board to keep software compatibility, but things will get interesting when I start designing the motor controllers.

Friday, November 6, 2009

Some Thoughts on the Processor Module

Back in my post on Broad Strokes, I described how I plan to divide the camera into a sensor module and a processor module. Since then I've rambled on about the sensor, but haven't said much about the processor. So let me run through my current thinking on the processor module.

I've settled on using a Gumstix Overo Air as a processing core. Gumstix as a company is very open source friendly, and all their products already have Linux ported and ready to go. Also, they publish full schematics for their various accessory boards. I can use these as design hints for the actual accessory board that I will have to make for the camera.

The Overo Air includes the processor, 256Meg SDRAM, 256Meg flash, a microSD card slot, and Wireless (802.11g) support. These alone make the board a reasonably complete Linux server. Load Linux on this board and I've got a lot of stuff for free. The Overo also includes connectors that carry many important processor signals out to an accessory board. This is where the application specific functionality gets added, so that's where my development effort needs to go.

Networking

The Overo Air COM board includes WiFi already, so it is possible to declare victory right there. However, I've decided to throw in Ethernet support. Although I expect the wireless support to be sufficient in most cases (and it has the obvious advantage of being wireless), I think it prudent to include a wired network interface for a little extra speed and stability. WiFi in my home can sometimes flake out on me when the neighbors turn on their routers; and besides, the wired link should be faster. So I will include Ethernet support for extra stability, speed and flexibility.

Some Gumstix boards use the LAN9221 ethernet interface chip. This is a pretty nice chip as it includes the MAC and PHY for 100baseT in a single package. There are also known Linux drivers for this chip. All I need to do is add a socket, some magnetics and a few other discretes, and it's done. In fact, sockets are available with the magnetics and LEDs built in.

USB

I'm actually interested in USB host support. I have found optical filter changers that use USB as the source of power and control. I would like to be able to control these devices from the processor module, so the processor will need at least one USB Type-A socket. The Type-A socket is the kind you find on your computer, into which you plug your peripherals. So I will definitely include USB host support. Conveniently, the Overo includes most of the electronics for a USB host; beyond the socket and a few discretes, the problem is largely solved out of the box.

What about USB peripheral support? That would mean including a Type-B socket (like what you would find on a USB peripheral) and would allow one to connect to a host computer via USB. I've decided against it. It wouldn't take much to add the support (the Overo has the necessary signals), but with two forms of networking already supported, I don't think even more are required. The only possible advantage I might see is the ability to mimic other USB cameras and thus gain compatibility with some existing software.

I'm still wavering on the idea of also adding a USB connection to act as a serial console for the processor. This would require some extra parts (probably an FTDI FT232 or the like) to transparently connect a standard UART to USB. This would be darned handy when initially bringing up the board, but useless otherwise. So I'm undecided. Perhaps the thing to do is design it in, but leave the parts unpopulated if I decide not to do it.

User Interface

Most existing astronomy cameras have no user interface. Or, rather, all the user interface is on a host computer that drives the camera and collects the images. But my camera system will have the capacity for some autonomy. It would be completely silly to expect to do image post processing in the camera module (that is a job for when you get home), but there are some telescope operations that are best done without a computer getting in the way. I'm thinking specifically of focus adjustment and mount/tracker alignment. There may be other things as well, if the user interface supports them.

A small LCD display with a touch screen seems perfect. A 480x272 display should be big enough to display a small star field (or a large planet) along with some simple controls, or more complicated menus. A touch screen will allow all the input to be through drawn buttons. This should be plenty for doing many camera manipulations, and it is infinitely flexible, a highly desirable feature for open source projects.

So I will include the Samsung LTE430WQ LCD touch screen display. This is the device recommended by Gumstix (they sell them as accessories) so I'm confident that there will be available drivers. I'll be able to find reference schematics and other support, so not a lot of risk.

Motors

One of the tasks for the processor module is to provide optical tracking. To do this, it will need to be able to control the stepper motors of the mount, and for that it will need motor drivers. The Overo does not directly include stepper motor controls, but they should be relatively easy to implement in an accessory FPGA, along with some external circuitry to safely power the motors. This will have to be somewhat configurable in order to handle different mounts. That can probably be handled by jumpers/pots to select voltages, and software to adjust step timings and calibrate angles.
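
To make the idea concrete, here is the classic half-step sequence such a controller has to emit. I've sketched it in C for readability (the real sequencer will be Verilog in the FPGA), and the winding bit assignments are placeholders until the drive circuitry is actually designed:

/* stepper_seq.c -- sketch of the half-step sequence a stepper
   controller walks through. The real sequencer will live in the
   FPGA; this C version just captures the idea. The bit layout
   (bit3=A+, bit2=A-, bit1=B+, bit0=B-) is a placeholder. */
#include <stdint.h>

/* Drive patterns, one bit per winding terminal. Walking the table
   forward turns the motor one way; backward turns it the other. */
static const uint8_t half_step[8] = {
      0x8, /* A+       */
      0xA, /* A+ B+    */
      0x2, /*    B+    */
      0x6, /* A- B+    */
      0x4, /* A-       */
      0x5, /* A- B-    */
      0x1, /*    B-    */
      0x9, /* A+ B-    */
};

/* Advance one half-step in the given direction (+1 or -1) and
   return the new drive pattern. The step *rate* is a separate
   knob: software sets the interval between steps to match the
   desired tracking speed. */
static inline uint8_t step_motor(int *phase, int dir)
{
      *phase = (*phase + dir + 8) % 8;
      return half_step[*phase];
}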

To get full motion control, the controller will need to be able to drive at least 2 motors: R.A. and declination. I also plan on adding two more motor drivers, one for possible focus control, and another for random mechanical accessories that I haven't thought of yet.

Power

And of course all of this needs power. The processor module will receive DC power from an external source, probably a rechargeable 12V battery. Besides taking power for itself, the processor will create the right voltages and power up the sensor module, the stepper motors, and any USB peripherals. I will also need some sort of power control circuitry to allow a clean start/stop of the various units. This part could get interesting as I am no master of this subject. All I know for certain is that the processor module will take in battery power and turn it into clean, filtered supply voltages for the rest of the system.

Summary

The processor board clearly hosts a lot of the complexity of the Icarus Astronomy Camera. Fortunately, the Gumstix Overo bundles up a lot of the pieces. It provides the processor and the system basics, and gives me a head start on most of the peripherals. A lot of the remaining bits can be handled using off the shelf components, and many implementation questions can be answered by reference schematics. Incidentally, I had considered adding a kitchen sink, but concluded that functionality was already covered elsewhere.

Monday, October 19, 2009

Open Hardware Open Source

This is a slight digression from the technical details of the camera itself, and into matters of infrastructure. Specifically, I'd like to spell out my choices for development and sharing tools. A goal of this project is to share the source code (i.e. design files) openly; but rather than complete the project and release only the final result, I intend to make the design files available as I work on them. That obviously means that the tools I use to manipulate the source files need to be available to potential users. The easiest way to manage that is to use open source tools wherever possible. And for electronics design and embedded programming, open source tools get me pretty far.

Schematics and PCB Layouts

This suite of tools is going to be the first to be put to use, as I'm going to be drawing at least partial schematics before anything else. There are a few open source schematic editors available: KiCAD, Electric and GEDA/gschem are common examples. Personally, I choose GEDA/gschem because that's what I'm most comfortable with. I also know that the integration with PCB, the circuit board layout editor, is thorough and advanced. So I see no reason to change my habits.

Logic Design

The camera system as a whole will have at least 2 FPGA devices: one in the camera module and one in the processor module. The design that goes into these devices will be described in Verilog, with the designs simulated by Icarus Verilog and synthesized by the vendor tools for the chosen device. Since the chosen devices are likely to be Xilinx parts, the vendor synthesizer is likely to be WebPACK. (Altera does not support Linux with its web edition software. Big strike against Altera.)
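
Incidentally, the edit/simulate loop with these tools is pleasantly short. Assuming a design file and its test bench (the file names here are just placeholders), a simulation run looks like:

iverilog -o sensor_fpga.vvp sensor_fpga.v sensor_fpga_tb.v
vvp sensor_fpga.vvp

The same Verilog (minus the test bench) then goes to the vendor synthesizer unchanged.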

Embedded Systems/Software

As I mentioned back in an earlier blog entry, the processor module will likely be built around a Gumstix single board computer. That right there implies the embedded operating system is Linux. The tools and techniques for programming embedded Linux are well known and mature. There is no need right now to pin it down any further than that.

Source Code Control/Distribution

The source code control system will need to carry schematics, printed circuit board layouts, Verilog design files, and C/C++ source code for the processor module, as well as assorted text files for various other purposes. It should also act as a repository that makes all these files available to interested users. There is a convenient hosting service at GitHub that I've used in the past for other purposes. This service (and others like it) offers storage and services for public "git" repositories. GitHub in particular has far better performance than sourceforge.net, and has been reliable for other projects that I've hosted there.

The git URL for the astrocam project is "git://github.com/steveicarus/astrocam.git". The quick way to get the current sources, then, is:
git clone git://github.com/steveicarus/astrocam.git
For now, that is where the source lives. There is very little there now, but that is where it will go. Text files within the distribution directory will describe the finer details of how to view/compile the source files.

Thursday, October 15, 2009

Moving Data Between Modules

In this post, I'll describe how I plan to link the camera sensor module to the processing module. Recall that I want the processor module and sensor module physically separate, with the sensor module compact and lightweight; so naturally I'm after a link that can use common cables without bulky connectors. The link needs to be fast enough to carry video down from the sensor and commands up from the processor, and simple enough that a tiny FPGA with simple logic can operate the sensor end. (In fact, I also plan to program the FPGA on the sensor module through this link.) It should also be reliable and fault tolerant.

But first, how much data do I need to transfer, and how fast? The KAI-04022 sensor (see my previous blog entry) has a maximum pixel rate of 40MHz, and each pixel will be 12 bits. Add to each pixel a few bits for framing (at least 3 bits, so call it 16 bits per pixel) and we get a target rate of 40MHz x 16 bits = 640MBits/sec. The link from the sensor to the processor must be at least that fast, because there are no plans for any storage or significant buffering capacity on the sensor module. In the other direction, there is not much need to send data from the processor to the sensor. There will be an FPGA to load, but that's a one-time thing and speed is not critical. The only other data up are commands to configure the hardware and start captures. What I can say for sure at this point is that the uplink will demand much less than the downlink, but it will also turn out that an uplink the same speed as the downlink is easy and convenient. The video speed is therefore the driving consideration.

I've chosen the National Semiconductor DS92LV16 16-Bit Bus LVDS Serializer/Deserializer to handle the video link. This chip can take a stream of 16-bit words clocked by a 25-80MHz clock, and serialize them onto a single LVDS pair. Route that pair over a cable, and a matching receiver chip can deserialize the signal back to a stream of 16-bit words with the recovered clock. Each chip contains both a serializer and a deserializer, so with two identical chips (one in the sensor module and one in the processor module) and 4 wires arranged as 2 twisted pairs, I can have a full-duplex connection between the camera module and the processor module. Given the desired video rate, it makes sense to run this whole business at 40MHz, to get 40*16 = 640MBits/sec.
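
Just to illustrate how 12 pixel bits and a few framing bits share a 16-bit link word, here is one possible packing. The flag assignments are pure invention for the sake of the example; the real framing format is not designed yet.

/* link_word.c -- one possible packing of a 12-bit pixel into the
   16-bit link word. Purely illustrative; the real framing format
   is not designed yet. */
#include <stdint.h>

#define LW_SOF 0x8000u /* hypothetical start-of-frame flag */
#define LW_SOL 0x4000u /* hypothetical start-of-line flag  */
#define LW_CMD 0x2000u /* hypothetical command-word flag   */

static inline uint16_t pack_pixel(uint16_t pixel12, int sof, int sol)
{
      uint16_t w = pixel12 & 0x0FFFu; /* 12 data bits */
      if (sof) w |= LW_SOF;           /* first pixel of a frame */
      if (sol) w |= LW_SOL;           /* first pixel of a line  */
      return w;                       /* one spare bit left over */
}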

At the board level, the DS92LV16 is very easy to use. The digital interface is a pair of 16-bit wide synchronous word streams; one stream from the remote, and the other to the remote. Super simple. There are also some link control pins. The LOCK* pin signals receiver lock, and the SYNC pin controls transmitter resynchronization. Connect the LOCK* pin to the SYNC pin and the entire link initialization can be controlled remotely by the processor module. The data bits can also be connected directly to an FPGA in such a way that, after link-up, the processor can load the FPGA configuration stream from reset. This chip is simple enough to operate that it can be used to remotely bootstrap the sensor module as a whole. Nice!

The Design Guide from National Semiconductor says that I can use CAT5 twisted pair ethernet cable with RJ-45 connectors to carry the LVDS pairs. I only need to operate at 40MHz, so cables as long as 16 meters can work. That is plenty; more likely, I'll be using 1 or 2 meter cables. The RJ-45 connectors are compact and cheap, and the cable itself carries 8 conductors arranged as 4 pairs. I'll use 2 of the pairs to carry the high speed data in both directions, which leaves the remaining 2 pairs for low speed signals. I'm thinking one of the extra pairs can carry a hard reset signal from the processor module to the sensor module. The remaining pair, for now, I'll leave unconnected. Later in the design process, I might find a better use for it. (No, I do not think it can carry power.)

So the plan is simple. Use 2 DS92LV16 chips, one in the processor module and one in the sensor module, and CAT5 ethernet cables to physically carry the links. Clock both directions at 40MHz to reduce the number of different frequencies running around and simplify the overall design. Wire the DS92LV16 in the sensor module so that the processor module can remotely initialize both link directions, and wire the FPGA on the sensor module to the DS92LV16 so that the processor module can use the link to configure the FPGA. And finally, use one of the pairs in the CAT5 cable to carry a hard reset signal. That takes care of the data link from the sensor module.

Tuesday, September 15, 2009

All Good Cameras Have a Sensor

I spent some time choosing a sensor early in the project, as much to test the feasibility of the project as anything. The CCD sensor will make or break the whole project, so as a sanity test, I decided to go ahead and do the research to choose a chip that fits the application.

The sensor of choice for this project needs to be usable in a camera system that does not include a mechanical shutter. That requirement alone narrows the field. I found that the sensors that go into your basic digital camera would not work because they need that mechanical shutter. Specifically, they need to be covered while the captured image is read out of the sensor. Yes, you can connect your DSLR camera to a telescope, but that camera body includes the mechanical shutter. But then again, newer digital cameras have video modes, so the right sensors for this project surely exist.

We also want our sensor to have an area of around 4Meg pixels. That is large even for HD video, which pretty much rules out cheap webcam sensor chips. We are getting into the realm of scientific imaging sensors.

Fortunately, Kodak has a whole range of sensors, the Interline CCD family, that fits the bill nicely. Not only do these chips support readout while the sensor is uncovered, they have an electronic shutter that allows for precise control of integration time. The KAI-04022 in particular is the right size (4Meg) and is square (2048x2048). That is a nice feature. (Square pegs fit better in round holes than rectangular pegs do.) These sensors are indeed intended for scientific imaging. Looking good.

The pixel pitch of the KAI-04022 is 7.4um square, which gives approximately 1.6 arc-seconds per pixel and a total of 55 arc-minutes field of view when put behind my 1000mm focal length telescope. The Dawes limit for this 120mm refractor is 0.98 arc-seconds, so this strikes me as a good compromise. I can add a barlow lens if I really want to push the resolution. If I want to increase the field of view, I'll just need a shorter telescope. There are physically larger sensors in the series, but they are getting into the 8-10MPixel range. Maybe later.

The chip is able to read out pixels at around 40MPixels/second, which gives about 10 frames per second. This is plenty fast enough for a plausible focus mode. It's not fast enough for full-motion video, but that's fine for what I'm after.
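
For anyone who wants to check the arithmetic of the last two paragraphs, here it is as a throwaway C calculation. The exact small-angle numbers land a little under the rounded figures above, and the real frame rate will be lower still once readout overhead is counted:

/* sensor_math.c -- back of the envelope numbers for the KAI-04022
   behind my 1000mm focal length, 120mm aperture refractor. */
#include <stdio.h>

int main(void)
{
      const double pixel_mm = 7.4e-3; /* 7.4um pixel pitch   */
      const double focal_mm = 1000.0; /* telescope focal len */
      const double pixels   = 2048.0; /* sensor is 2048x2048 */

      /* Small angle: 1 radian = 206265 arc-seconds. */
      double scale = 206265.0 * pixel_mm / focal_mm;
      double fov_arcmin = scale * pixels / 60.0;

      /* Frame rate at the 40MHz maximum pixel clock. */
      double fps = 40e6 / (pixels * pixels);

      printf("plate scale: %.2f arc-sec/pixel\n", scale); /* ~1.5 */
      printf("field of view: %.1f arc-min\n", fov_arcmin); /* ~52 */
      printf("frame rate: %.1f frames/sec\n", fps);        /* ~9.5 */
      return 0;
}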

And the Kodak chips are well documented, with thorough datasheets, and evaluation boards with complete schematics. That's good.

Finally, I checked with Kodak sales, and the KAI-04022 is currently in production. There are color and grayscale versions of the chip, so I have some flexibility there, and for small quantities the chips themselves can be purchased directly from Kodak for something in the range of $1,000.00. A lot of money, but within the total budget for the camera as a whole, so that's what it will be.

By the way, it turns out that many store-bought astronomy cameras in the $3000+ price range use these or similar chips as well, so that gives me some extra confidence that I'm not too far off the mark.

Sunday, September 13, 2009

Broad Strokes

In my getting started post, I described the design goals for this camera. I can tell you right off that one particular requirement somewhat contradicts all the others. How does one shove all that functionality into a lightweight sensor module? That also raises the preliminary question "How light?"

The answer: as light as possible. Remember, the sensor is going to attach to the telescope far away from the balance point of the optical tube. Every gram, multiplied by the moment arm from the center of mass, becomes a burden on the tracking motors, and can therefore be a real irritation in the field. When I attach a Nikon D80 camera body to my telescope (a Celestron XLT120 refractor), I practically have to remount the tube on the equatorial mount to re-balance it. Heck, even fiddling with the focus might throw the balance out, never mind adding a barlow lens. So I'm motivated to keep it light, light, light. (Can I get it as light as an eyepiece? Probably not, but that's an interesting impossible goal.)

Fortunately, the first major design decision follows rather naturally. The whole device can be broken into two major parts: the sensor module and the processing module. The sensor module need only contain the sensor chip and the bare minimum electronics to send the signal out through a cable to the processing module. The processing module can contain all the featurisms of the camera, and can be strapped to the telescope near its center of mass. The size and weight of the sensor module can be minimized by moving functionality into the processing module.

The sensor module needs to hold very little stuff. I can probably keep it down to only the CCD sensor, a few support chips (including a tiny FPGA) and the discrete parts needed to handle some non-digital bits. It should be possible to keep the sensor circuit board down to little more than twice the area of the sensor itself. The sensor chip is going to be pretty large as semiconductor devices go, so this shouldn't be much of a challenge. Keeping the chip count low (and hopefully the power draw with it) should have the added advantage of limiting the heat sources in the sensor assembly. Sensors like cold.

The processing module need not have the size/weight limits, so I can go crazy there. And crazy is where I shall go. The basis can be a Gumstix running Linux. This gets me a whole host of possibilities for free, including networking software and hardware. To that I will attach an interface board that includes the electronics needed to communicate with the sensor board. I can also include on the interface board the components needed to manage power supplies (the sensor board will likely have tricky power needs) and even interfaces to stepper motors. This will allow me to connect not only to the sensor board, but also to the mount motors.

Actually, there will probably be a third box. The battery pack will likely be something off the shelf that generates some standard voltages (DC) that the processing module interface board can convert to the voltages needed to run the whole system. I like the idea of having Li-ion power for the whole system, but adding the weight of battery cells to the processor module really is a bit over the top. So batteries get a module of their own.

So that's the very broad stroke. A sensor module will contain the CCD sensor and minimal support electronics; a processor module will contain the interesting image processing hardware/software, the interfaces to the mount, and some power supply management; and a battery pack will supply the power. Next post, I will probably discuss my choice of sensor chip.