Friday, October 22, 2010

Antenna Deployment Subsystem Design

Back after a long time....

Here is what I designed for the antenna deployment subsystem, which happens to be the preliminary stage of the Studsat team recruitment at my college....

                Antenna Deployment Subsystem for a Cube Sat

Abstract:
          The core idea of this document is to propose an antenna deployment system for a CubeSat. Since my knowledge in the field of antennas is very limited, I have tried to bring up some concepts for antenna deployment, and you may find parts of them a bit impractical to realize. The proposal has two major stages. The first is an illustration of the various deployment mechanisms in practice and their relevance to my proposal. The second is a detailed illustration of how the system could be developed; the design reflects my own perspective on the environment of the satellite and the launch conditions.
 Summary of the intended plan:
          To start off with the first phase, there are several constraints: space, mass, power consumption and reliability. To satisfy all of these, the antenna and its related subsystems must have minimum mass and occupy little volume. They must also withstand the high acceleration of the launch vehicle and the harsh conditions of space. The actuators which deploy the antennas to their fullest spread must mechanically hold the antenna in spite of the high G's experienced during takeoff, and they must not consume any energy during this phase, as some launch specifications require a complete electrical shutdown.
            One type of actuator is the magnetic actuator, which is a very simple concept and easy to implement. A permanent magnet holds the antenna strips during launch, so no electricity is required, and as soon as the satellite is ejected from the payload capsule, a current is passed through a coil to generate a magnetic field opposing that of the permanent magnet. This deploys the antennas while complying with all the pre-launch requirements, and it has the added benefit of reusability during the testing phase. The second type of actuator is the one-time-use melting wire actuator, which is much lighter than the magnetic actuator. Tests have shown that a nichrome wire of approximately 2mm diameter and 4mm length can melt and break when a voltage as low as 4V is applied with a current as small as 0.9A. In this type, a nylon wire holds the antenna in a stowed position. A nichrome coil is wound tightly around the nylon wire, and as soon as the satellite is launched into space, power is delivered to the coil to melt it and cut the nylon wire so that the antenna can resume its operations. The antennas then swing back to their natural position under their own elastic forces once the wire is cut.
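From the 4V/0.9A figures quoted above, the electrical burden on the actuator circuit follows directly from Ohm's law. A small sketch (the resistance value is simply what those voltage and current numbers imply, not a measured property of the wire):

```c
#include <assert.h>

/* Figures quoted above for the nichrome burn wire. */
static const double burn_voltage = 4.0; /* volts across the coil  */
static const double burn_current = 0.9; /* amps through the coil  */

/* R = V / I : resistance the coil must present to draw 0.9 A at 4 V */
double coil_resistance_ohm(void) { return burn_voltage / burn_current; }

/* P = V * I : power dissipated in the coil during the burn */
double burn_power_w(void) { return burn_voltage * burn_current; }
```

So the actuator needs roughly 3.6W across a coil of about 4.4 ohms for the short duration of the burn, which is why the design below routes essentially the whole solar-array output to the coil during deployment.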
            Now moving on to the second phase, here is my design for the deployment system. Of the two methods above, the melting wire actuator seems more suitable, since it takes up less mass and volume and achieves the intended result with less electrical effort. For the design of this subsystem I consider the following factors: first, the power consumption; second, the size of the actuating circuit; and last, the method of actuation and of signalling the other subsystems to start their functions once the antenna has been deployed.
            Let me assume that a separate subsystem control circuit is allocated just for antenna deployment. Taking power consumption into consideration, this subsystem needs power only after the ejection of the satellite from the payload capsule. The main power source is the energy harvested from the onboard solar cells, which might fetch up to 30mW/cm2, so a low-power device is needed. A microcontroller like TI's MSP430 would be optimal, as it offers good performance in a small package (14-pin SMD devices, TSSOP being the smallest) and suits small applications. This microcontroller works at 3.3V logic and can operate at voltages as low as 1.8V. Its sleep-mode current is as low as 40nA, which matters for the satellite's overall power budget because, in my design, the MSP430 is inactive for the rest of the mission after deployment, and it is essential that it then consume almost no power. I would use the MSP430F2013 version of the MSP430, as I have practical experience with it on TI's EZ430-F2013 debugger. Now coming to the design part,
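To put the sleep figure in perspective, the controller's post-deployment draw can be computed from the 3.3V and 40nA numbers above (this is simple P = V*I, assuming the device sits at the full 3.3V rail while asleep):

```c
#include <assert.h>

/* Figures quoted above for the MSP430 in sleep mode. */
static const double supply_volts  = 3.3;
static const double sleep_current = 40e-9; /* 40 nA */

/* P = V * I, expressed in nanowatts */
double sleep_power_nw(void) { return supply_volts * sleep_current * 1e9; }
```

That is about 132nW, against tens of milliwatts harvested per square centimetre of cell, so the sleeping controller is effectively invisible in the power budget.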
1)      Since no other system will be online in the satellite, it is up to the MSP430 to deploy the antenna and initiate all the other systems. As the MSP430's power requirements pose no problem, the device will be ready almost instantly. The controller is assumed to draw power directly from a dedicated array of solar cells sized so that the peak voltage stays below a specified limit, and since the device internally has a regulator, there is no need for external voltage regulation. We would generate a delay in the controller using a 16-bit timer before it actually starts the deployment work, just to ensure that the solar cells are ready to output maximum power.
2)      After the controller, the next part of the circuit is the switching element. Instead of a relay I would use a small MOSFET, as it reduces the mass of the module. A MOSFET rated for 8V and 1A would be sufficient, since melting the nichrome wire requires less than 1A at 5V.
3)      As soon as the controller finishes the delay (its length is determined by trial and error after simulation), it actuates the MOSFET. The controller has one highly multiplexed 8-bit port, and one pin can certainly drive the gate of the MOSFET to turn it on. So the MOSFET performs the job of switching the power flow between the solar cells and the nichrome wire. Since no other device will be on until the antenna is deployed, we can ensure that maximum power is delivered to the actuator to melt the wire.
4)      We shall make the controller drive the MOSFET for a duration established during testing, and hence ensure that the antennas are successfully deployed. Once this job is done, the MSP430 has to ensure that all the other subsystems of the satellite are turned on.
5)      This actuation can be signalled via the 7 remaining pins of the MSP430's port. Another option is a common one-time switch, activated by the MSP430 once the antenna has been deployed, which in turn acts as a chip enable for all the other circuits onboard. After deployment succeeds there is no further work for the deployment system, so the MSP430 can be put into sleep mode, where it consumes hardly any power. The MOSFET also no longer passes any power, since the nichrome coil has melted and there is no longer a complete electrical path.
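The sequence in steps 1) to 5) can be sketched in C. This is only an illustration of the control flow: the globals, the delay counts and the `delay_cycles` helper are hypothetical stand-ins for the real MSP430 register accesses (P1DIR/P1OUT bits, the timer module, and the low-power-mode intrinsics).

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical hardware state -- on a real MSP430 these would be
   bits in the P1OUT register, not C globals. */
static bool mosfet_gate       = false; /* gate of the burn-wire MOSFET */
static bool subsystems_enable = false; /* common chip-enable line      */
static bool controller_asleep = false; /* stands in for entering LPM4  */

/* Placeholder delay; real code would use the 16-bit timer or
   __delay_cycles(). The durations come from testing, as in the text. */
static void delay_cycles(unsigned long n) { (void)n; }

void deploy_antenna(void)
{
    delay_cycles(1000000);     /* step 1: let the solar cells stabilise  */
    mosfet_gate = true;        /* step 3: power the nichrome coil        */
    delay_cycles(500000);      /* step 4: hold for the tested burn time  */
    mosfet_gate = false;       /*         release the gate               */
    subsystems_enable = true;  /* step 5: wake the rest of the satellite */
    controller_asleep = true;  /*         then sleep for good            */
}
```

The two delay constants are placeholders; as noted above, the real burn duration would be fixed by trial and error during testing.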
In this way, an effective deployment system can be achieved with low power consumption and little added mass to transport.

--
Nagaraja

Wednesday, September 1, 2010

Hyper-V

Hello guys, it is time for some new stuff to be posted...

    Today I will be telling you what I know about Hyper-V. Hyper-V is Microsoft's hypervisor. It is software which comes built into the Windows Server 2008 operating system, a huge product in Microsoft's server line, constructed of course on the NT platform and far more evolved than the earlier Windows Server 2003. So if you were a Windows Server 2000 user, you might not have noticed many changes when you migrated to Windows Server 2003, but that is not the case with Windows Server 2008: the entire architecture has evolved to a whole new level. Windows Server 2008 comes in five major editions: Standard, Datacenter, Enterprise, Web and Itanium. Hyper-V comes as a pre-installed feature in some of these editions; there are also versions without the Hyper-V feature, and these cost less. So depending on your needs, you can purchase the required version of Windows.
    Getting back to Hyper-V: as I said earlier, it provides support for a virtualised environment and hence lets you create a virtualisation server. Some of you might be wondering what the need for a virtual server is....
    Well, let me explain this with an example. Suppose you run a company with 3 database servers, each with a 250GB hard drive and 1GB of RAM. You plan to upgrade all of these servers to, say, 3GB of RAM and 500GB of hard drive capacity. Now you have two options. The first is the traditional one: just upgrade all the systems to whatever extent you have planned. Note that by doing this you must also improve the cooling, hardware maintenance and space utilisation for the servers you maintain, because maintenance overhead comes as a free gift package with the huge benefits a server can provide. There is another thing to consider: will your servers always use 3GB of RAM, or is it only at peak hours that utilisation reaches 3GB? You cannot supply less RAM and spoil your business at peak hours, but you also need not invest in extra RAM that only serves the peak hours when you have a better option like Hyper-V.
   I will explain how to realise the above 3 servers on a single Hyper-V virtual server. Microsoft has put a lot of effort into virtual servers for many reasons, some of them being the ones mentioned above. Take a server with, say, 1.8-2TB of hard drive space (these days hard drive space is not a constraint, as drives are available very cheaply; you get a 1TB Seagate hard drive for less than $50 in the Indian market). You can choose to install, say, 10GB of RAM. Why 10GB? Well, you have 3 servers, and at peak time each will use 3GB, totalling around 9GB of peak-time RAM usage; the server OS also needs to live. Here is another interesting thing: Windows Server 2008 comes with an install option called Server Core, an installation choice the server administrator can make. You cannot buy Server Core separately; it is an installation option in every version of the OS. What's new in it? Let's just say you don't have the GUI anymore, only the age-old yet most powerful command prompt. You will only see a blank screen with a command prompt console for your operations. It still runs the server roles and features, just without the GUI. Unless you are a command-line geek, don't use this option. But it does have a real advantage: it uses less RAM, 384MB as quoted.
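The RAM sizing above is simple arithmetic. In this sketch the 1GB host-OS headroom is my assumption, inferred from the text's jump from 9GB of VM usage to a 10GB recommendation:

```c
#include <assert.h>

/* Figures from the example above. */
static const int vm_count       = 3; /* database servers                */
static const int peak_per_vm_gb = 3; /* GB each VM needs at peak        */
static const int host_os_gb     = 1; /* assumed headroom for the host OS */

/* Total host RAM = peak VM demand plus room for the host OS itself. */
int required_host_ram_gb(void)
{
    return vm_count * peak_per_vm_gb + host_os_gb; /* 9 + 1 = 10 GB */
}
```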
Now that you have installed Windows Server 2008 or R2 (the second release of Windows Server 2008), the next thing to do is create the virtual machines which will be your database servers. You will create 3 such machines, and creating each virtual machine is exactly like installing a new OS. Hyper-V in R2 supports up to 64 logical CPUs on the host, which is quite a large number. When creating virtual hard disks you have two choices: fixed-size disks, where you fix the size of the virtual machine's logical hard disk up front (that allocated space is then unavailable for any of your other work), or, being smart, dynamic hard disks, which grow as data is added to the virtual machine's logical drive. You can also reserve logical CPUs for certain virtual machines: say one database server needs far more processor support than the others; you can then allocate more logical CPUs to that particular server. As usual, network load balancing support is present.
Not only this: the upcoming SP1 release of Windows Server 2008 R2 adds a feature called Dynamic Memory, where RAM allocation to virtual machines is done dynamically. This is a huge benefit for that peak-hour problem I told you about. Without it, you reserve a nominal starting RAM, say 1.5GB per VM; then 4.5GB is allocated across your 3 servers, and it stays allocated even when the servers do not need so much memory (imagine non-peak hours). With Dynamic Memory you can instruct Hyper-V to grow a VM's RAM from 1.5GB up to 3GB whenever its RAM usage crosses a particular percentage. Say you set up a VM with a startup RAM of 1.5GB and tell Hyper-V that the VM may be given a maximum of 3GB whenever its free RAM drops below, say, 20%. Whenever the VM's RAM usage crosses 80%, Hyper-V seeps some more RAM into the virtual machine, increasing the total allocation so that roughly 20% free RAM is always maintained. This goes on till the total RAM allocated to that VM reaches 3GB (in this example); beyond that the VM has to live within its cap unless you raise the configured maximum. The great advantage here is that you can not only run the 3 database servers simultaneously but also add more VMs onto the same virtual server, getting more benefit out of a single machine. Microsoft has also paid attention to graphical quality: generally you cannot get the Aero theme working in a VM, but in SP1 a new feature called RemoteFX enables these high graphical capabilities.
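The 20%-free rule described above can be captured in a few lines. This is only a toy model of the behaviour as the text describes it, not Hyper-V's actual algorithm; the function name and the 0.8 threshold are restatements of the example's numbers:

```c
#include <assert.h>

/* Toy model of the Dynamic Memory rule described above:
   keep about 20% of the VM's allocated RAM free, never exceeding the cap. */
double next_allocation_gb(double allocated_gb, double used_gb, double max_gb)
{
    if (used_gb > 0.8 * allocated_gb) {  /* free space fell below 20%   */
        allocated_gb = used_gb / 0.8;    /* smallest size with 20% free */
        if (allocated_gb > max_gb)
            allocated_gb = max_gb;       /* hard cap, e.g. 3 GB         */
    }
    return allocated_gb;
}
```

With the example's numbers: a VM allocated 1.5GB and using 1.4GB would be grown to 1.75GB, while one using 2.8GB would be capped at the 3GB maximum.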
Now, if you wish to move a VM from one virtual server to another, Windows Server 2008 R2 provides Cluster Shared Volumes, which let the clustered hosts share the same storage. On top of this sits a feature called "Live Migration", which lets you migrate VMs from one server to another without any drops in the connections that exist to the VMs on the network.

There is a lot more to know about Hyper-V; I will leave that to you. Refer to the book on Windows Server 2008, which happens to be the first book in my collection; you will find it on the Books page of my blog. Happy reading.....