Thursday 5 July 2012

Bluetooth HCI

Bluetooth HCI (Host Controller Interface)

The Bluetooth specification includes the definition of an interface (the Host Controller Interface) to the hardware of a Bluetooth module. It defines the interface between the HCI Driver (on the host microcontroller) and the Host Controller firmware (on the Bluetooth module).

The HCI Driver is also known as the HCI Host.

The physical links for transmitting HCI packets between an HCI Driver and the Host Controller firmware, via a Host Controller Transport Layer, are:
  • USB (Universal Serial Bus).
  • RS232 (standard serial port), with error detection and recovery.
  • Generic UART, assuming no errors.
HCI Command Packets: commands are issued by the HCI Driver to the Host Controller:
  • Packet indicator (for UART interfaces) of 1.
  • Op-code (16 bits): identifies the command:
    OGF (Op-code Group Field, most significant 6 bits);
    OCF (Op-code Command Field, least significant 10 bits).
  • Parameter length (8 bits): total length of all parameters in bytes.
  • Command parameters: the number of parameters and their length is command specific.
HCI Data Packets for ACL data:
  • Packet indicator (for UART interfaces) of 2.
  • Control information (16 bits):
    Broadcast flag (most significant 2 bits):
    00 = point-to-point packet (no broadcast);
    01 = Active Slave Broadcast;
    10 = Parked Slave Broadcast.
    Packet boundary flag (2 bits):
    01 = continuing packet of a higher level message;
    10 = first packet of a higher level message.
    Connection handle (least significant 12 bits).
  • Data length (16 bits): total length of data in bytes.
  • Data.
HCI Data Packets for SCO data:
  • Packet indicator (for UART interfaces) of 3.
  • Connection handle (12 bits).
  • Unused (4 bits).
  • Data length (8 bits): total length of data in bytes.
  • Data.
HCI Event Packets: the Host Controller notifies the HCI Driver of events:
  • Packet indicator (for UART interfaces) of 4.
  • Event code (8 bits): identifies the event.
  • Parameter length (8 bits): total length of all parameters in bytes.
  • Event parameters: the number of parameters and their length is event specific.
Commands are processed asynchronously, so the completion of a command (and its return data) is reported by a Command Complete event. Commands may be processed in parallel, so a later command may complete before an earlier command.


HCI ERROR CODE DESCRIPTIONS

For some HCI error codes it is implementation-dependent whether the error should be returned using a Command Status event or the event associated with the issued command (following a Command Status event with Status = 0x00). In these cases the command cannot start executing because of the error, so it is recommended to use the Command Status event. It is only a recommendation because the Command Status event cannot be used in all software architectures.

The Bluetooth module firmware handles the Link Management Protocol (LMP). If the local side sends LMP_au_rand and the remote side returns LMP_not_accepted, the Bluetooth module will send an HCI event to inform the HCI Driver, with the error code 0x05. See below:

AUTHENTICATION FAILURE (0x05)

The 'Authentication Failure' error code is returned by the Host Controller in the Status parameter of a Connection Complete event or Authentication Complete event when pairing or authentication fails due to incorrect results in the pairing/authentication calculations (because of an incorrect PIN code or link key).

The 'Authentication Failure' error code can also be used as a value for the Reason parameter in the Disconnect command (as a reason code). The error code will then be sent over the air, so that it is returned in the Reason parameter of the Disconnection Complete event on the remote side. In the Disconnection Complete event following a Command Status event (with Status=0x00) on the local side, on which the Disconnect command was issued, the Reason parameter will instead contain the reason code 'Connection Terminated By Local Host'.

Example of Authentication Failure in a Connection Complete event:

> HCI Event: Connect Request (0x04) plen 10
bdaddr 00:0D:44:2F:8C:C0 class 0x20040c type ACL
< HCI Command: Accept Connection Request (0x01|0x0009) plen 7
bdaddr 00:0D:44:2F:8C:C0 role 0x01
Role: Slave
> HCI Event: Command Status (0x0f) plen 4
Accept Connection Request (0x01|0x0009) status 0x00 ncmd 1
> HCI Event: Link Key Request (0x17) plen 6
bdaddr 00:0D:44:2F:8C:C0
< HCI Command: Link Key Request Reply (0x01|0x000b) plen 22
bdaddr 00:0D:44:2F:8C:C0 key C9295D90AF0463360C848B55881C2A11
> HCI Event: Command Complete (0x0e) plen 10
Link Key Request Reply (0x01|0x000b) ncmd 1
status 0x00 bdaddr 00:0D:44:2F:8C:C0
> HCI Event: Connect Complete (0x03) plen 11
status 0x05 handle 1 bdaddr 00:0D:44:2F:8C:C0 type ACL encrypt 0x00
Error: Authentication Failure


Wednesday 4 July 2012

Bootloader and U-Boot

Bootloader:

Assume you want a bootloader that can boot the Linux kernel.
A very basic bootloader would perform a simple memory read/write test, initialise the UART, copy the kernel image from flash to RAM, then jump to the kernel's starting address.

To expand the basic bootloader, we can add:
- wait a few seconds for user interruption when booting
- add a loop to accept and process user input
- configure more on-board hardware resources, such as the Ethernet, SATA, and USB controllers

ROM Bootloader:

Certain microcontrollers contain a ROM Bootloader (RBL), which resides in the ROM of the microcontroller. The RBL supports booting from various memory devices (master mode) or from an external master (slave mode). When using an external master, a PC acts as the boot master via UART boot mode. The RBL supports a limited number of NAND devices.

Upon reset or power-up, the microcontroller begins executing code from its RBL. The RBL reads the BOOTCFG register to determine the boot mode.

Normally, a secondary bootloader, known as the User Boot Loader (UBL), is loaded by the RBL. The UBL can be used to initialise the memory controller, set the system clock, and download a tertiary bootloader, such as U-Boot.

In NAND boot mode, the RBL requires a UBL to exist in NAND memory to load the tertiary bootloader from NAND to RAM; in NOR boot mode, a UBL is not required. The RBL locates the UBL image via a magic number in the UBL header, copies the UBL image to IRAM, then begins executing it.

The UBL will locate the tertiary bootloader image, load it from NAND to RAM, then jump to its entry point address.

U-Boot:

U-Boot is a popular open-source bootloader. It has been ported to the ARM, PPC, and MIPS platforms.


  • For PPC porting, modify files in:

cpu/mpc8xxx/
board/freescale/mpc8xxx/
lib_ppc/
include/asm-ppc

  • For ARM porting, modify files in:

cpu/arm926xxx/
board/mv88xxxxx/db88xxxxxx/
lib_arm/
include/asm-arm

Important files:
common/main.c - contains the main_loop() function
common/cmd_bootm.c - contains the bootm command handling
post/tests.c - contains the array of functions for POST tests
include/configs/MPC8xxx.h - contains U-Boot env settings, such as bootargs
include/configs/db88xxxxxx.h

Example: Building U-Boot:
make MPC8xxx_NAND_config    (for PPC)
make

make distclean
make db88xxxxxxxx_config  (for ARM)
make

U-Boot on ARM9 boot up sequence:

cpu/arm926xxx/start.S

  • reset
  • early DRAM init
  • relocate U-Boot to DRAM
  • stack_setup
  • clear_bss
  • call start_armboot


lib_arm/board.c

  • start_armboot()
  • (*init_fnc_ptr)() - looping through the init_sequence[] array
    • cpu_init
    • board_init
    • interrupt_init
    • serial_init
    • console_init_f
    • display_banner
    • dram_init
  • nand_init()
  • devices_init()
  • console_init_r()  - set stdin, stdout, stderr
  • for (;;) main_loop() - parsing user input

DRAM Initialisation:

Set I-cache
TWSI init
DRAM interface detect
Read SPD from on-board DRAM or DIMMs
DRAM interface config
Get total size

DRAM Serial Presence Detect (SPD) in U-Boot:

For DRAM on DIMMs, we can read the module's EEPROM to extract the SPD data:
1) Set up the TWSI slave address, type, and offset
2) Read the data over TWSI
3) Calculate the checksum and compare it against the stored checksum
4) Analyse the SPD data, such as DIMM type, row and column addresses, number of banks, data width, cycle time, refresh interval, burst length, CAS latency, DIMM bank density, and memory size

Add a new command:

common/cmd_xxx.c
include/command.h
   #define U_BOOT_CMD

Use the U_BOOT_CMD macro to fill in a cmd_tbl_t structure.

U-Boot boot settings:

For example, a board with 64MB RAM, and booting from network, using ramdisk.

bootargs=root=/dev/ram0 rw initrd=0xc2000000,16M console=ttyS2,115200n8
ramboot=tftp 0xc2000000 rootfs.ext2.gz;tftp 0xc0700000 uImage;bootm 0xc0700000
bootcmd=tftp 0xc2000000 rootfs.ext2.gz;tftp 0xc0700000 uImage;bootm 0xc0700000




Tuesday 3 July 2012

Linux Network Driver Interrupt Mitigation


Introduction:
NAPI ("New API") is a modification to the Linux driver packet processing framework, which is designed to improve the performance of high-speed networking. NAPI works through:
Interrupt mitigation

High-speed networking can create thousands of interrupts per second, all of which tell the system something it already knew: it has lots of packets to process. NAPI allows drivers to run with (some) interrupts disabled during times of high traffic, with a corresponding decrease in system load.

Packet throttling
When the system is overwhelmed and must drop packets, it's better if those packets are disposed of before much effort goes into processing them. NAPI-compliant drivers can often cause packets to be dropped in the network adaptor itself, before the kernel sees them at all.
NAPI was first incorporated in the 2.5/2.6 kernel. The use of NAPI is entirely optional.
Real World Example of NAPI network driver:
In driver init function:
  netif_napi_add(xxx, napi, napi_poll_func, weight)
In driver open function:
  setup the irq routine
  napi_enable(napi)
In irq routine:
  napi_schedule(napi)
In napi_poll_func:
  do the actual packet reception
  call netif_receive_skb(skb) to hand packets to the Linux kernel for further processing. If a received packet is an IP packet, it will be forwarded to Linux's packet-filtering framework.
  if (all packets are received)
    napi_complete(napi)

The NAPI polling function will be called when sufficient packets have been received on the network card. The network card must have enough RAM or a large enough DMA ring to store the packets while interrupts are disabled.
  

Monday 2 July 2012

A Tale of Two Linux Routers

By default, administrators define just a single default route. So for a Linux system with two network interface cards, eth0 and eth1, if the default route points out one interface, traffic received (e.g., ICMP pings) on the other interface will have its return traffic go out on the default-route interface.

In short, this post will explain how to ensure traffic going into eth1 goes out only on eth1, as well as enforce all traffic going into eth0 goes out only on eth0.
Assuming we have the following network setup:
  • eth0 - 10.10.1.10 netmask 255.255.255.0
  • eth0's gateway is: 10.10.1.254
  • eth1 - 192.168.7.7 netmask 255.255.255.0
  • eth1's gateway is: 192.168.7.1
First, we need to make sure the Linux kernel has support for “policy routing” enabled.
During the kernel compilation process, we need to:
cd /usr/src/linux
make menuconfig
Select "Networking --->"
Select "Networking options --->"
[*] TCP/IP networking
[*] IP: advanced router
Choose IP: FIB lookup algorithm (FIB_HASH)
[*] IP: policy routing
[*] IP: use netfilter MARK value as routing key
Next, we need to download, compile, and install the iproute2 utilities. (Most Linux distributions have binary packages for this utility.) Once installed, typing ip route show should show the system’s routing table. 
To check the system’s initial route configuration:
# netstat -anr
Kernel IP routing table
Destination   Gateway       Genmask         Flags  MSS  Window  irtt  Iface
192.168.7.0   0.0.0.0       255.255.255.0   U      0    0       0     eth1
10.10.1.0     0.0.0.0       255.255.255.0   U      0    0       0     eth0
0.0.0.0       192.168.7.1   0.0.0.0         UG     0    0       0     eth1
So, basically, the system is using eth1 as the default route. If anyone pings 192.168.7.7, then the response packets will properly go out eth1 to the upstream gateway of 192.168.7.1. But what about pinging 10.10.1.10? The incoming ICMP packets will properly arrive on eth0, but the outgoing response packets will be sent out via eth1! That is not good.
To fix this issue, we need to create a new policy routing table entry within the /etc/iproute2/rt_tables file. We will call it table #1, named “admin” (for routing administrative traffic onto eth0).
# echo "1 admin" >> /etc/iproute2/rt_tables
Then, we are going to add a few new entries within this “admin” table. Specifically, we provide information about eth0's local /24 subnet, along with eth0's default gateway.
ip route add 10.10.1.0/24 dev eth0 src 10.10.1.10 table admin
ip route add default via 10.10.1.254 dev eth0 table admin
At this point, we have created a new, isolated routing table named “admin” that is not used by the OS just yet, because we still need to create a rule telling the OS how to use this table. For starters, type ip rule show to see your current policy routing ruleset. Here's what an empty ruleset looks like:
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
Without going into all the boring details, each rule entry is evaluated in ascending order. The main gist is that the normal main routing table appears as entry 32766 in this list. (This would be the normal route table we see when we type netstat -anr.)
We are now going to create two new rule entries that will be evaluated before the main rule entry.
ip rule add from 10.10.1.10/32 table admin
ip rule add to 10.10.1.10/32 table admin
Typing ip rule show now shows the following policy routing rulesets:
0: from all lookup local
32764: from all to 10.10.1.10 lookup admin
32765: from 10.10.1.10 lookup admin
32766: from all lookup main
32767: from all lookup default
Rule 32764 specifies that for all traffic going to eth0's IP, make sure to use the “admin” routing table instead of the “main” one. Likewise, rule 32765 indicates that for all traffic originating from eth0's IP, make sure to use the “admin” routing table as well. For all other packets, use the “main” routing table. In order to commit these changes, it's a good idea to type ip route flush cache.
So the system should now be able to properly route traffic to these two different default gateways. 
Update: Here are some additional resources that I have found useful.
http://lartc.org/howto/lartc.rpdb.multiple-links.html
http://linux-ip.net/html/routing-tables.html