
Thursday 26 December 2013

Click photos at the speed of light with this camera.

An inexpensive multi-purpose 'nano-camera' that can operate at the speed of light has been developed by a team of MIT researchers, including Indian-origin scientists. 

The $500 camera could be used in medical imaging and collision-avoidance detectors for cars, and to improve the accuracy of motion tracking and gesture-recognition devices used in interactive gaming. 

The three-dimensional camera was developed by researchers in the Massachusetts Institute of Technology Media Lab. 

The camera is based on "Time of Flight" technology in which the location of objects is calculated by how long it takes a light signal to reflect off a surface and return to the sensor. 

However, unlike existing devices based on this technology, the new camera is not fooled by rain, fog, or even translucent objects, said co-author Achuta Kadambi. 

"Using the current state of the art, such as the new Kinect, you cannot capture translucent objects in 3-D," said Kadambi, a graduate student at MIT. 

"That is because the light that bounces off the transparent object and the background smear into one pixel on the camera. Using our technique you can generate 3D models of translucent or near-transparent objects," Kadambi added. 

In a conventional Time of Flight camera, a light signal is fired at a scene, where it bounces off an object and returns to strike the pixel. 

Since the speed of light is known, it is simple for the camera to calculate the distance the signal has travelled and therefore the depth of the object it has been reflected from. 

The new device uses an encoding technique commonly used in the telecommunications industry to calculate the distance a signal has travelled, said Ramesh Raskar, an associate professor of media arts and sciences. 

Raskar was the leader of the Camera Culture group within the Media Lab, which developed the method alongside Kadambi, Refael Whyte, Ayush Bhandari, and Christopher Barsi at MIT and Adrian Dorrington and Lee Streeter from the University of Waikato in New Zealand. 

In 2011 Raskar's group unveiled a trillion-frame-per-second camera capable of capturing a single pulse of light as it travelled through a scene. 

The camera does this by probing the scene with a femtosecond impulse of light, then uses fast but expensive laboratory-grade optical equipment to take an image each time. This "femto-camera" costs around $500,000 to build. 

In contrast, the new "nano-camera" probes the scene with a continuous-wave signal that oscillates at nanosecond periods. 

This allows the team to use inexpensive hardware -- off-the-shelf light-emitting diodes (LEDs) can strobe at nanosecond periods, for example -- meaning the camera can reach a time resolution within one order of magnitude of femtophotography while costing just $500.

Friday 6 December 2013

Android will soon see its rival: Tizen.

Most mobile phone users have never heard of Tizen. Neither have car owners or anyone with a fridge.
Samsung wants to change that.

The South Korean electronics giant is quietly pushing to make its Tizen operating system as familiar a part of the technology lexicon as Google's Android or Apple's iOS. Its ambition doesn't stop there: Samsung sees the software in your car, fridge and television too.
The first developer conference in Asia for Tizen wrapped up on 12th November after a two-day run, bringing together app developers and Tizen backers from Samsung, Intel and mobile operators.
Samsung did not announce a Tizen device, but it made a pitch for developers to create apps for the mobile operating system that is yet to be seen in the market. Samsung promised to give out $4 million cash to the creators of the best Tizen apps.
Samsung supplied about one third of the smartphones sold worldwide in the third quarter, nearly all of them running on Google's Android. Its early bet on Google's free-of-charge operating system served Samsung well and the company's rise to top smartphone seller also helped Android become the most used mobile platform in the world. According to Localytics, 63% of all Android mobile devices in use are made by Samsung.
But while Samsung was wildly successful at selling its Galaxy phones and tablets, it had little success in locking Galaxy device users into music, messaging and other Samsung services. Google, however, benefited from more people using its search service, Google Play and other Google mobile applications on Galaxy smartphones. Owners of Galaxy devices remain, for the most part, beholden to Google's Android update schedule and its rules.
About nine in every 10 smartphone users are tied to either Google's Android or Apple's iPhone ecosystems, generating profit for Google and Apple every time they purchase a game or application on their smartphone.
That is partly why Samsung wants to expand its control beyond hardware to software, by building its own mobile operating system.
"With only hardware, its influence is limited,'' said Kang Yeen-kyu, an associate research fellow at state-run Korea Information Society Development Institute. "Samsung's goal is to establish an ecosystem centered on Samsung.''
The consolidation of global technology companies in the last few years reflects such trends. Apple has always made its own operating system for the iPhone. Google Inc. acquired Motorola Mobility in 2011 and Microsoft announced in September its plan to buy Nokia, leaving Samsung the only major player in the smartphone market that does not make its own operating system.
Samsung executives told analysts last week that the company plans to beef up its software competitiveness through acquisitions and splashing cash on the development of mobile content and services.
But Tizen's start appears bumpy. Samsung said earlier this year the first Tizen phone would hit the market this fall but it has not materialized. Samsung declined to comment on release schedules.

Saturday 31 August 2013

IVR

IVR system (Interactive Voice Response System)
You know how technology makes our lives very simple nowadays: everything is available at our fingertips. You press a key on your phone and you get exactly the information you need.
It is a system which reduces overall human effort, and the technology behind it is called an IVR system.
Now it's time to move into a deeper introduction to this system.
Introduction
IVR systems are an example of computer-telephone integration (CTI). The most common way for a phone to communicate with a computer is through the tones generated by each key on the telephone keypad. These are known as dual-tone multi-frequency (DTMF) signals.
Each number key on a telephone emits two simultaneous tones: one low-frequency and one high-frequency. The number one, for example, produces both a 697-Hz and a 1209-Hz tone that's universally interpreted by the public switched telephone network as a "1."
A computer needs special hardware called a telephony board or telephony card to understand the DTMF signals produced by a phone. A simple IVR system only requires a computer hooked up to a phone line through a telephony board and some inexpensive IVR software. The IVR software allows you to pre-record greetings and menu options that a caller can select using his telephone keypad.
More advanced IVR systems include speech-recognition software that allows a caller to communicate with a computer using simple voice commands. Speech recognition software has become sophisticated enough to understand names and long strings of numbers -- perhaps a credit card or flight number.
On the other end of the phone call, an organization can employ text-to-speech (TTS) software to fully automate its outgoing messages. Instead of recording all of the possible responses to a customer query, the computer can generate customized text-like account balances or flight times and read it back to the customer using an automated voice.


How IVR appears to the end user

When you use an IVR system, you are actually talking with an automated computer system.
Many of today's most advanced IVR systems are based on a special programming language called Voice Extensible Markup Language (VXML). Here are the basic components of a VXML-based IVR system:
  1. Telephone network -- Incoming and outgoing phone calls are routed through the regular Public Switched Telephone Network (PSTN) or over a VoIP network.
  2. TCP/IP network -- A standard Internet network, like the ones that provide Internet and intranet connectivity in an office.
  3. VXML telephony server -- This special server sits between the phone network and the Internet network. It serves as an interpreter, or gateway, so that callers can interface with the IVR software and access information on databases. The server also contains the software that controls functions like text-to-speech, voice recognition and DTMF recognition.
  4. Web/application server -- This is where the IVR software applications live. There might be several different applications on the same server: one for customer service, one for outgoing sales calls, one for voice-to-text transcription. All of these applications are written in VXML. The Web/application server is connected to the VXML telephony server over the TCP/IP network.
  5. Databases -- Databases contain real-time information that can be accessed by the IVR applications. If you call your credit card company and want to know your current balance, the IVR application retrieves the current balance total from a database. The same goes for flight arrival times, movie times, et cetera. One or more databases can be linked to the Web/application server over the TCP/IP network.


IVR system
Let's take a more advanced look at the call flow inside the IVR system.


call flow inside IVR


Once you dial an assisted number provided by your service provider or company, you are prompted to give some input using the keypad.
When you press a number, the phone sends a DTMF signal (which I will describe in my next post) to the system. These numbers are mapped to service applications inside the IVR system. Once you enter the right number, your call is automatically diverted to the matching service; otherwise you are prompted again to input a correct number.


Sunday 25 August 2013

Inside Linux

Linux booting process :

The figure below gives an overview of Linux booting on a PC or on customized embedded hardware.
booting sequence
Now it's time to go a little deeper into the booting process.

System startup

When a system is first booted, or is reset, the processor executes code at a well-known location. In a personal computer (PC), this location is in the basic input/output system (BIOS), which is stored in flash memory on the motherboard. The central processing unit (CPU) in an embedded system invokes the reset vector to start a program at a known address in flash/ROM. In either case, the result is the same. Because PCs offer so much flexibility, the BIOS must determine which devices are candidates for boot. We'll look at this in more detail later.
When a boot device is found, the first-stage boot loader is loaded into RAM and executed. This boot loader is less than 512 bytes in length (a single sector), and its job is to load the second-stage boot loader.
When the second-stage boot loader is in RAM and executing, a splash screen is commonly displayed, and Linux and an optional initial RAM disk (temporary root file system) are loaded into memory. When the images are loaded, the second-stage boot loader passes control to the kernel image and the kernel is decompressed and initialized. At this stage, the second-stage boot loader checks the system hardware, enumerates the attached hardware devices, mounts the root device, and then loads the necessary kernel modules. When complete, the first user-space program (init) starts, and high-level system initialization is performed.
That's Linux boot in a nutshell. Now let's dig in a little further and explore some of the details of the Linux boot process. The secondary, or second-stage, boot loader could be more aptly called the kernel loader. The task at this stage is to load the Linux kernel and optional initial RAM disk.

GRUB stage boot loaders

The /boot/grub directory contains the stage1, stage1.5, and stage2 boot loaders, as well as a number of alternate loaders (for example, CD-ROMs use iso9660_stage_1_5).
The first- and second-stage boot loaders combined are called Linux Loader (LILO) or GRand Unified Bootloader (GRUB) in the x86 PC environment. Because LILO has some disadvantages that were corrected in GRUB, let's look into GRUB. (See many additional resources on GRUB, LILO, and related topics in the Resources section later in this article.)
The great thing about GRUB is that it includes knowledge of Linux file systems. Instead of using raw sectors on the disk, as LILO does, GRUB can load a Linux kernel from an ext2 or ext3 file system. It does this by making the two-stage boot loader into a three-stage boot loader. Stage 1 (MBR) boots a stage 1.5 boot loader that understands the particular file system containing the Linux kernel image. Examples include reiserfs_stage1_5 (to load from a Reiser journaling file system) or e2fs_stage1_5 (to load from an ext2 or ext3 file system). When the stage 1.5 boot loader is loaded and running, the stage 2 boot loader can be loaded.
With stage 2 loaded, GRUB can, upon request, display a list of available kernels (defined in /etc/grub.conf, with soft links from /etc/grub/menu.lst and /etc/grub.conf). You can select a kernel and even amend it with additional kernel parameters. Optionally, you can use a command-line shell for greater manual control over the boot process. With the second-stage boot loader in memory, the file system is consulted, and the default kernel image and initrd image are loaded into memory. With the images ready, the stage 2 boot loader invokes the kernel image.

MBR

MBR stands for Master Boot Record.
It is located in the first sector of the bootable disk, typically /dev/hda or /dev/sda.
MBR is less than 512 bytes in size and has three components: 1) primary boot loader info in the first 446 bytes, 2) partition table info in the next 64 bytes, and 3) the MBR validation check in the last 2 bytes.
It contains information about GRUB (or LILO on old systems).
So, in simple terms, the MBR loads and executes the GRUB boot loader.

GRUB

GRUB stands for Grand Unified Bootloader.
If you have multiple kernel images installed on your system, you can choose which one is executed.
GRUB displays a splash screen and waits for a few seconds; if you don't enter anything, it loads the default kernel image as specified in the GRUB configuration file.
GRUB has knowledge of the filesystem (the older Linux loader LILO didn't understand filesystems).
The GRUB configuration file is /boot/grub/grub.conf (/etc/grub.conf is a link to this). The following is a sample grub.conf from CentOS.
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-194.el5PAE)
          root (hd0,0)
          kernel /boot/vmlinuz-2.6.18-194.el5PAE ro root=LABEL=/
          initrd /boot/initrd-2.6.18-194.el5PAE.img
As you notice from the above info, it contains kernel and initrd image.
So, in simple terms GRUB just loads and executes Kernel and initrd images.


Kernel

With the kernel image in memory and control given from the stage 2 boot loader, the kernel stage begins. The kernel image isn't so much an executable kernel as a compressed kernel image. Typically this is a zImage (compressed image, less than 512KB) or a bzImage (big compressed image, greater than 512KB) that has been previously compressed with zlib. At the head of this kernel image is a routine that does some minimal amount of hardware setup and then decompresses the kernel contained within the kernel image and places it into high memory. If an initial RAM disk image is present, this routine moves it into memory and notes it for later use. The routine then calls the kernel and the kernel boot begins.

The kernel then mounts the root file system as specified by "root=" in grub.conf and executes the /sbin/init program.
Since init is the first program to be executed by the Linux kernel, it has the process ID (PID) of 1. Do a 'ps -ef | grep init' and check the PID.
initrd stands for Initial RAM Disk. initrd is used by the kernel as a temporary root file system until the kernel is booted and the real root file system is mounted. It also contains the necessary drivers compiled inside, which help it access the hard drive partitions and other hardware.

Init

After the kernel is booted and initialized, the kernel starts the first user-space application. This is the first program invoked that is compiled with the standard C library. Prior to this point in the process, no standard C applications have been executed.
In a desktop Linux system, the first application started is commonly /sbin/init. But it need not be. Rarely do embedded systems require the extensive initialization provided by init (as configured through /etc/inittab). In many cases, you can invoke a simple shell script that starts the necessary embedded applications.
Init looks at the /etc/inittab file to decide the Linux run level.
The following are the available run levels:
§  0 – halt
§  1 – single user mode
§  2 – multiuser, without NFS
§  3 – full multiuser mode
§  4 – unused
§  5 – X11
§  6 – reboot
Init identifies the default run level from /etc/inittab and uses that to load all the appropriate programs.
Execute 'grep initdefault /etc/inittab' on your system to identify the default run level.
If you want to get into trouble, you can set the default run level to 0 or 6; since you know what 0 and 6 mean, you probably won't do that.
Typically you would set the default run level to either 3 or 5.

Runlevel

When the Linux system is booting up, you might see various services getting started. For example, it might say "starting sendmail .... OK". Those are the runlevel programs, executed from the run-level directory defined by your run level.
Depending on your default init level setting, the system will execute the programs from one of the following directories.

§  Run level 0 – /etc/rc.d/rc0.d/
§  Run level 1 – /etc/rc.d/rc1.d/
§  Run level 2 – /etc/rc.d/rc2.d/
§  Run level 3 – /etc/rc.d/rc3.d/
§  Run level 4 – /etc/rc.d/rc4.d/
§  Run level 5 – /etc/rc.d/rc5.d/
§  Run level 6 – /etc/rc.d/rc6.d/

Please note that there are also symbolic links available for these directories directly under /etc. So, /etc/rc0.d is linked to /etc/rc.d/rc0.d.
Under the /etc/rc.d/rc*.d/ directories, you will see programs whose names start with S or K.
Programs starting with S are used during startup. S for startup.
Programs starting with K are used during shutdown. K for kill.
The numbers right next to S and K in the program names are the sequence numbers in which the programs should be started or killed.
For example, S12syslog starts the syslog daemon, which has the sequence number 12, and S80sendmail starts the sendmail daemon, which has the sequence number 80. So the syslog program will be started before sendmail.

Saturday 24 August 2013

Issue with dynamic memory allocation in embedded systems

In C and C++, it can be very convenient to allocate and de-allocate blocks of memory as and when needed. This is certainly standard practice in both languages and almost unavoidable in C++. However, the handling of such dynamic memory can be problematic and inefficient. For desktop applications, where memory is freely available, these difficulties can be ignored. For embedded - generally real time - applications, ignoring the issues is not an option.
Dynamic memory allocation tends to be nondeterministic; the time taken to allocate memory may not be predictable and the memory pool may become fragmented, resulting in unexpected allocation failures. In this session the problems will be outlined in detail and an approach to deterministic dynamic memory allocation detailed.

Memory Layout in C
C/C++ Memory Spaces
It may be useful to think in terms of data memory in C and C++ as being divided into three separate spaces:
Static memory. This is where variables, which are defined outside of functions, are located. The keyword static does not generally affect where such variables are located; it specifies their scope to be local to the current module. Variables that are defined inside of a function, which are explicitly declared static, are also stored in static memory. Commonly, static memory is located at the beginning of the RAM area. The actual allocation of addresses to variables is performed by the embedded software development toolkit: a collaboration between the compiler and the linker. Normally, program sections are used to control placement, but more advanced techniques, like Fine Grain Allocation, give more control. Commonly, all the remaining memory, which is not used for static storage, is used to constitute the dynamic storage area, which accommodates the other two memory spaces.
Automatic variables. Variables defined inside a function, which are not declared static, are automatic. There is a keyword to explicitly declare such a variable – auto – but it is almost never used. Automatic variables (and function parameters) are usually stored on the stack. The stack is normally located using the linker. The end of the dynamic storage area is typically used for the stack. Compiler optimizations may result in variables being stored in registers for part or all of their lifetimes; this may also be suggested by using the keyword register.
The heap. The remainder of the dynamic storage area is commonly allocated to the heap, from which application programs may dynamically allocate memory, as required.

Dynamic Memory in C
In C, dynamic memory is allocated from the heap using some standard library functions. The two key dynamic memory functions are malloc() and free().
The malloc() function takes a single parameter, which is the size of the requested memory area in bytes. It returns a pointer to the allocated memory. If the allocation fails, it returns NULL. The prototype for the standard library function is like this:
          void *malloc(size_t size);
The free() function takes the pointer returned by malloc() and de-allocates the memory. No indication of success or failure is returned. The function prototype is like this:
          void free(void *pointer);
To illustrate the use of these functions, here is some code to statically define an array and set the fourth element’s value:
         int my_array[10];
         my_array[3] = 99;
The following code does the same job using dynamic memory allocation:
         int *pointer;
         pointer = malloc(10 * sizeof(int));
         *(pointer+3) = 99;
The pointer de-referencing syntax is hard to read, so normal array referencing syntax may be used, as [ and ] are just operators:
          pointer[3] = 99;
When the array is no longer needed, the memory may be de-allocated thus:
       free(pointer);
       pointer = NULL;
Assigning NULL to the pointer is not compulsory, but is good practice, as it will cause an error to be generated if the pointer is erroneously utilized after the memory has been de-allocated.
The amount of heap space actually allocated by malloc() is normally one word larger than that requested. The additional word is used to hold the size of the allocation and is for later use by free(). This “size word” precedes the data area to which malloc() returns a pointer.
There are two other variants of the malloc() function: calloc() and realloc().
The calloc() function does basically the same job as malloc(), except that it takes two parameters – the number of array elements and the size of each element – instead of a single parameter (which is the product of these two values). The allocated memory is also initialized to zeros. Here is the prototype:
          void *calloc(size_t nelements, size_t elementSize);
The realloc() function resizes a memory allocation previously made by malloc(). It takes as parameters a pointer to the memory area and the new size that is required. If the size is reduced, data may be lost. If the size is increased and the function is unable to extend the existing allocation, it will automatically allocate a new memory area and copy data across. In any case, it returns a pointer to the allocated memory. Here is the prototype:
void *realloc(void *pointer, size_t size);
Dynamic Memory in C++
Management of dynamic memory in C++ is quite similar to C in most respects. Although the library functions are likely to be available, C++ has two additional operators – new and delete – which enable code to be written more clearly, succinctly and flexibly, with less likelihood of errors. The new operator can be used in three ways:
        p_var = new typename;
        p_var = new type(initializer);
        p_array = new type [size];
In the first two cases, space for a single object is allocated; the second one includes initialization. The third case is the mechanism for allocating space for an array of objects.
The delete operator can be invoked in two ways:
          delete p_var;
          delete[] p_array;
The first is for a single object; the second deallocates the space used by an array. It is very important to use the correct de-allocator in each case.
There is no operator that provides the functionality of the C realloc() function.
Here is the code to dynamically allocate an array and initialize the fourth element:
      int* pointer;
      pointer = new int[10];
      pointer[3] = 99;
Using the array access notation is natural. De-allocation is performed thus:
      delete[] pointer;
      pointer = NULL;
Again, assigning NULL to the pointer after deallocation is just good programming practice. Another option for managing dynamic memory in C++ is the use of the Standard Template Library. This may be inadvisable for real time embedded systems.
Issues and Problems
As a general rule, dynamic behavior is troublesome in real time embedded systems. The two key areas of concern are determination of the action to be taken on resource exhaustion and nondeterministic execution performance.
There are a number of problems with dynamic memory allocation in a real time system. The standard library functions (malloc() and free()) are not normally reentrant, which would be problematic in a multithreaded application. If the source code is available, this should be straightforward to rectify by locking resources using RTOS facilities (like a semaphore). A more intractable problem is associated with the performance of malloc(). Its behavior is unpredictable, as the time it takes to allocate memory is extremely variable. Such nondeterministic behavior is intolerable in real time systems.
Without great care, it is easy to introduce memory leaks into application code implemented using malloc() and free(). This is caused by memory being allocated and never being deallocated. Such errors tend to cause a gradual performance degradation and eventual failure. This type of bug can be very hard to locate.
Memory allocation failure is a concern. Unlike a desktop application, most embedded systems do not have the opportunity to pop up a dialog and discuss options with the user. Often, resetting is the only option, which is unattractive. If allocation failures are encountered during testing, care must be taken with diagnosing their cause. It may be that there is simply insufficient memory available – this suggests various courses of action. However, it may be that there is sufficient memory, but not available in one contiguous chunk that can satisfy the allocation request. This situation is called memory fragmentation.
Memory Fragmentation
The best way to understand memory fragmentation is to look at an example. For this example, it is assumed that there is a 10K heap. First, an area of 3K is requested, thus:
         #define K (1024)
         char *p1;
         p1 = malloc(3*K);
Then, a further 4K is requested:
        char *p2;
        p2 = malloc(4*K);
3K of memory is now free.
Some time later, the first memory allocation, pointed to by p1, is de-allocated:
        free(p1);
This leaves 6K of memory free in two 3K chunks. A further request for a 4K allocation is issued:
       p1 = malloc(4*K);
This results in a failure – NULL is returned into p1 – because, even though 6K of memory is available, there is not a 4K contiguous block available. This is memory fragmentation.
It would seem that an obvious solution would be to de-fragment the memory, merging the two 3K blocks to make a single one of 6K. However, this is not possible because it would entail moving the 4K block to which p2 points. Moving it would change its address, so any code that has taken a copy of the pointer would then be broken. In other languages (such as Visual Basic, Java and C#), there are defragmentation (or “garbage collection”) facilities. This is only possible because these languages do not support direct pointers, so moving the data has no adverse effect upon application code. This defragmentation may occur when a memory allocation fails or there may be a periodic garbage collection process that is run. In either case, this would severely compromise real time performance and determinism.
Memory with an RTOS
A real time operating system may provide a service which is effectively a reentrant form of malloc(). However, it is unlikely that this facility would be deterministic.
Memory management facilities that are compatible with real time requirements – i.e. they are deterministic – are usually provided. This is most commonly a scheme which allocates blocks – or “partitions” – of memory under the control of the OS.
Block/partition Memory Allocation
Typically, block memory allocation is performed using a “partition pool”, which is defined statically or dynamically and configured to contain a specified number of blocks of a specified fixed size. For Nucleus OS, the API call to define a partition pool has the following prototype:
   STATUS NU_Create_Partition_Pool(NU_PARTITION_POOL *pool, CHAR *name, VOID *start_address, UNSIGNED pool_size, UNSIGNED partition_size, OPTION suspend_type);
This is most clearly understood by means of an example:
   status = NU_Create_Partition_Pool(&MyPool, "any name", (VOID *) 0xB000, 2000, 40, NU_FIFO);
This creates a partition pool with the descriptor MyPool, containing 2000 bytes of memory, filled with partitions of size 40 bytes (i.e. there are 50 partitions). The pool is located at address 0xB000. The pool is configured such that, if a task attempts to allocate a block, when there are none available, and it requests to be suspended on the allocation API call, suspended tasks will be woken up in a first-in, first-out order. The other option would have been task priority order.
Another API call is available to request allocation of a partition. Here is an example using Nucleus OS:
      status = NU_Allocate_Partition(&MyPool, &ptr, NU_SUSPEND);
This requests the allocation of a partition from MyPool. When successful, a pointer to the allocated block is returned in ptr. If no memory is available, the task is suspended, because NU_SUSPEND was specified; other options, which may have been selected, would have been to suspend with a timeout or to simply return with an error.
When the partition is no longer required, it may be de-allocated thus:
      status = NU_Deallocate_Partition(ptr);
If a task of higher priority was suspended pending availability of a partition, it would now be run. There is no possibility for fragmentation, as only fixed size blocks are available. The only failure mode is true resource exhaustion, which may be controlled and contained using task suspend, as shown.
Additional API calls are available which can provide the application code with information about the status of the partition pool – for example, how many free partitions are currently available. Care is required in allocating and de-allocating partitions, as the possibility for the introduction of memory leaks remains.
Memory Leak Detection
The potential for programmer error resulting in a memory leak when using partition pools is recognized by vendors of real time operating systems. Typically, a profiler tool is available which assists with the location and rectification of such bugs.
Real Time Memory Solutions
Having identified a number of problems with dynamic memory behavior in real time systems, some possible solutions and better approaches can be proposed.
Dynamic Memory
It is possible to use partition memory allocation to implement malloc() in a robust and deterministic fashion. The idea is to define a series of partition pools with block sizes in a geometric progression; e.g. 32, 64, 128, 256 bytes. A malloc() function may be written to deterministically select the correct pool to provide enough space for a given allocation request. This approach takes advantage of the deterministic behavior of the partition allocation API call, the robust error handling (e.g. task suspend) and the immunity from fragmentation offered by block memory.
Conclusions
C and C++ use memory in various ways, both static and dynamic. Dynamic memory includes stack and heap.
Dynamic behavior in embedded real time systems is generally a source of concern, as it tends to be non-deterministic and failure is hard to contain.
Using the facilities provided by most real time operating systems, a dynamic memory facility may be implemented which is deterministic, immune from fragmentation and with good error handling.