Monday, October 19, 2009

Open Hardware Open Source

This is a slight digression from the technical details of the camera itself, and into matters of infrastructure. Specifically, I'd like to spell out my choices for development and sharing tools. A goal of this project is to share the source code (i.e. design files) openly; but rather than completing the project and releasing only the final result, I intend to make the design files available as I work on them. That obviously means that the tools I use to manipulate the source files need to be tools available to potential users. The easiest way to manage that is to use open source tools wherever possible. And for electronics design and embedded programming, open source tools get me pretty far.

Schematics and PCB Layouts

This suite of tools is going to be the first put to use, as I'm going to be drawing at least partial schematics before anything else. There are a few open source schematic editors available: KiCAD, Electric and gEDA/gschem are common examples. Personally, I chose gEDA/gschem because that's what I'm most comfortable with. I also know that the integration with PCB, the circuit board layout editor, is thorough and advanced. So I see no reason to change my habits.

Logic Design

The camera system as a whole will have at least 2 FPGA devices: one in the camera module and one in the processor module. The design that goes into these devices will be described in Verilog, with the designs simulated by Icarus Verilog and synthesized by the vendor tools for the chosen device. Since the chosen devices are likely to be Xilinx parts, the vendor synthesizer is likely to be WebPACK. (Altera does not support Linux with its web edition software. Big strike against Altera.)
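To make that flow concrete, here is the sort of trivial self-checking Verilog that the real design files will resemble, along with the commands to run it through Icarus Verilog. (This module is a throwaway example of mine, not an actual astrocam source file.)

// hello.v -- compile and run with:
//   iverilog -o hello.vvp hello.v
//   vvp hello.vvp
module hello;
   reg clk = 0;
   reg [3:0] count = 0;

   always #5 clk = ~clk;           // free-running simulation clock

   always @(posedge clk) begin
      $display("count = %0d", count);
      count <= count + 1;
      if (count == 10) $finish;
   end
endmodule

The same source files then go to the vendor synthesizer unchanged; only the simulation-specific testbench code stays behind.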

Embedded Systems/Software

As I mentioned in an earlier blog entry, the processor module will likely be built around a Gumstix single board computer. That right there implies the embedded operating system is Linux. The tools and techniques for programming embedded Linux are well known and mature. There is no need right now to pin it down any further than that.

Source Code Control/Distribution

The source code control system will need to carry schematics, printed circuit board layouts, Verilog design files, and C/C++ source code for the processor module, as well as assorted text files for various other purposes. The source code control should also act as a repository that makes all these files available to interested users. There is a convenient hosting service at GitHub that I've used in the past for other purposes. This service (and others like it) offers storage and services for public "git" repositories. GitHub in particular has far better performance than sourceforge.net, and has been reliable for other projects that I've hosted there.

The git URL for the astrocam project is "git://github.com/steveicarus/astrocam.git". The quick way to get the current sources, then, is:
git clone git://github.com/steveicarus/astrocam.git
For now, that is where the source lives. There is very little there yet, but that is where everything will go. Text files within the distribution directory will describe the finer details of how to view/compile the source files.

Thursday, October 15, 2009

Moving Data Between Modules

In this post, I'll describe how I plan to link the camera sensor module to the processing module. Recall that I want the processor module and sensor module physically separate, with the sensor module compact and lightweight; so naturally I'm after a cable link that uses common cables without bulky connectors. The link needs to be fast enough to carry video down from the sensor and commands up from the processor, and simple enough that a tiny FPGA with simple logic can operate the sensor end. (In fact, I also plan to program the FPGA on the sensor module through this link.) It should also be reliable and fault tolerant.

But first, how much data do I need to transfer, and how fast? The KAI-04022 sensor (see my previous blog entry) has a maximum pixel rate of 40MHz, and each pixel will be 12 bits. Add a few bits of framing to each pixel (at least 3 bits, which in practice pads the pixel out to a 16-bit word) and we get a target rate of 40MHz * 16 bits = 640Mbits/sec. The link from the sensor to the processor must be at least that fast, because there are no plans for any storage or significant buffering capacity on the sensor module. In the other direction, there is not much need to send data from the processor to the sensor. There will be an FPGA to load, but that's a one-time thing and speed is not critical. The only other data up are commands to configure the hardware and start captures. What I can say for sure at this point is that the uplink will demand much less than the downlink, but it will also turn out that an uplink the same speed as the downlink is easy and convenient. The video speed is therefore the driving consideration.
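For the record, here is the scratch arithmetic, written (naturally) as a Verilog module that Icarus Verilog can run directly. The 4 bits of padding are my own working assumption; the point is just that the payload rounds up to one 16-bit word per pixel.

// rates.v -- run with: iverilog -o rates.vvp rates.v && vvp rates.vvp
module rates;
   localparam PIXCLK_MHZ = 40;  // KAI-04022 maximum pixel rate
   localparam PIXEL_BITS = 12;  // bits per pixel from the sensor
   localparam WORD_BITS  = 16;  // pixel plus framing, padded to a word
   initial begin
      $display("pixel payload: %0d Mbits/sec", PIXCLK_MHZ*PIXEL_BITS); // 480
      $display("link target  : %0d Mbits/sec", PIXCLK_MHZ*WORD_BITS);  // 640
   end
endmodule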

I've chosen the National Semiconductor DS92LV16 16-Bit Bus LVDS Serializer/Deserializer to handle the video link. This chip can take a stream of 16-bit words clocked by a 25-80MHz clock, and serialize them onto a single LVDS pair. Route that pair over a cable, and a matching receiver chip can deserialize the signal back to a stream of 16-bit words with the recovered clock. Each chip contains both a serializer and a deserializer, so with two identical chips (one in the sensor module and one in the processor module) and 4 wires arranged as 2 twisted pairs I can have a full-duplex connection between the camera module and the processor module. Given the desired video rate, it makes sense to run this whole business at 40MHz, to get 40*16 = 640Mbits/sec.
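To give a feel for how thin the sensor-side logic can be, here is a first sketch of packing each pixel into the 16-bit word the serializer carries. The layout of the framing bits (frame valid, line valid, two spares) is a placeholder of my own; the real encoding gets pinned down later.

// pixel_pack.v -- sketch only; the bit layout is provisional.
module pixel_pack(
   input  wire        clk,    // 40MHz pixel clock
   input  wire        fv,     // frame valid from the sensor
   input  wire        lv,     // line valid from the sensor
   input  wire [11:0] pixel,  // 12-bit pixel sample
   output reg  [15:0] din     // to the DS92LV16 parallel inputs
);
   always @(posedge clk)
     din <= {fv, lv, 2'b00, pixel};  // 2 spare bits held at 0
endmodule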

At the board level, the DS92LV16 is very easy to use. The digital interface is a pair of 16-bit wide synchronous word streams; one stream from the remote, and the other to the remote. Super simple. There are also some link control pins. The LOCK* pin signals receiver lock and the SYNC pin controls transmitter resynchronization. Connect the LOCK* pin to the SYNC pin and the entire link initialization can be controlled remotely by the processor module. The data bits can also be connected directly to an FPGA in such a way that after link-up the processor can load the FPGA configuration stream from reset. The chip is simple enough to operate that it can be used to remotely bootstrap the sensor module as a whole. Nice!
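Seen from the processor module, the bring-up then reduces to a tiny state machine: drive SYNC long enough for the far end to lock, then watch LOCK*. A rough sketch follows; the SYNC_TICKS value is a placeholder of mine, and the real minimum sync duration has to come from the DS92LV16 datasheet.

// link_bringup.v -- sketch of processor-side link initialization.
module link_bringup(
   input  wire clk,      // 40MHz word clock
   input  wire reset,
   input  wire lock_n,   // LOCK* from the local deserializer
   output reg  sync,     // SYNC to the local serializer
   output reg  link_up
);
   localparam SYNC_TICKS = 1024; // placeholder; check datasheet minimum
   reg [10:0] count;
   always @(posedge clk or posedge reset)
     if (reset) begin
        sync    <= 1'b1;  // transmit sync patterns to the sensor module
        count   <= 0;
        link_up <= 1'b0;
     end else if (sync) begin
        count <= count + 1;
        if (count == SYNC_TICKS-1) sync <= 1'b0;
     end else begin
        link_up <= ~lock_n;  // LOCK* low means our receiver has lock
     end
endmodule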

The Design Guide from National Semiconductor says that I can use CAT5 twisted pair ethernet cable with RJ-45 connectors to carry the LVDS pairs. I only need to operate at 40MHz, so cables as long as 16 meters can work. That is plenty; more likely, I'll be using 1 or 2 meter long cables. The RJ-45 connectors are compact and cheap, and the cable itself carries 8 conductors arranged as 4 pairs. I'll use 2 of the pairs to carry the high speed data in both directions. That covers all the high speed data, so the remaining 2 pairs can be set aside for low speed signals. I'm thinking 1 of the extra pairs can carry a hard reset signal from the processor module to the sensor module. The remaining pair, for now, I'll leave unconnected. Later in the design process, I might find a better use for it. (No, I do not think it can carry power.)

So the plan is simple. Use 2 DS92LV16 chips, one in the processor module and one in the sensor module, and CAT5 ethernet cables to physically carry the links. Clock both directions at 40MHz to reduce the number of different frequencies running around and simplify the overall design. Wire the DS92LV16 in the sensor module so that the processor module can remotely initialize both link directions, and wire the FPGA on the sensor module to the DS92LV16 so that the processor module can use the link to configure the FPGA. And finally, use one of the pairs in the CAT5 cable to carry a hard reset signal. That takes care of the data link from the sensor module.