SEMINAR TOPICS

                          Here are some seminar topics for computer science and IT students, with a brief description of each topic. Click on the link with each topic for more information and to download the PDF and abstract!

Seminar topics on LINUX


1. A network security defence mechanism with Linux Security Modules

         With the increasing popularity of the Internet, more and more applications are deployed on it, and in their wake malicious network behaviour occurs more and more frequently. Consequently, a network system must be hardened to prevent attacks and intrusions. Moreover, a gateway is usually used to connect network devices and share resources among them, so the security mechanism of the gateway becomes an important part of network robustness. In this paper, we present a network security defence mechanism developed with the Linux Security Modules (LSM) framework to prohibit common invasive actions such as backdoor, worm, port-scan and SYN-flooding attacks. Such a mechanism can be employed on a network gateway to nullify illegal network actions and improve its security.
For abstract and more CLICK HERE.
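
As a flavour of what an LSM-based defence can look like, below is a minimal, illustrative sketch of a hook that vetoes a suspicious network action, written against the classic Linux 2.6-era register_security() interface. The port number and policy are placeholders chosen for illustration, not the paper's actual mechanism.

    #include <linux/module.h>
    #include <linux/init.h>
    #include <linux/errno.h>
    #include <linux/security.h>
    #include <linux/in.h>

    /* LSM hook: called before a socket connect is allowed to proceed. */
    static int netdef_socket_connect(struct socket *sock,
                                     struct sockaddr *address, int addrlen)
    {
        struct sockaddr_in *sin = (struct sockaddr_in *)address;
        if (address->sa_family == AF_INET &&
            ntohs(sin->sin_port) == 31337)    /* placeholder backdoor port */
            return -EPERM;                    /* deny the connection */
        return 0;                             /* otherwise allow */
    }

    static struct security_operations netdef_ops = {
        .socket_connect = netdef_socket_connect,
    };

    static int __init netdef_init(void)
    {
        return register_security(&netdef_ops);   /* install the hooks */
    }
    module_init(netdef_init);
    MODULE_LICENSE("GPL");

A real gateway defence would implement many such hooks (packet reception, port binding, connection tracking) and combine them with state to recognise scans and SYN floods.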

2. Making the kernel responsible: a new approach to detect and prevent buffer overflows

                This paper takes the stance that the kernel is responsible for preventing user processes from interfering with each other, and for the overall secure operation of the system. Part of ensuring the overall secure operation of the computer is preventing buffers in memory from having too much data written to them, overflowing them. This paper presents a technique for obtaining the writable bounds of any memory address. A new system call for obtaining these bounds, ptrbounds, is described that implements this technique. The system call was implemented in the Linux 2.4 kernel and can be used to detect most buffer overflow situations. Once an overflow has been detected it can be dealt with in a number of ways, including limiting the amount of information written to the buffer. Also, a method for accurately tracking the allocation of memory on the stack is proposed to enhance the accuracy of the technique. The intended use of ptrbounds is to provide programmers with a method for checking the bounds of pointers before writing data, and to automatically check the bounds of pointers passed to the kernel.
For abstract and more CLICK HERE.
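
To make the idea concrete, here is a minimal sketch of how a programmer might use such a bounds query to truncate an overflowing write. The abstract does not give the signature of ptrbounds, so the wrapper and result structure below are assumptions made for illustration.

    #include <string.h>

    /* Assumed result: the writable region containing the queried address. */
    struct bounds { char *lower; char *upper; };

    /* Assumed user-space wrapper around the paper's ptrbounds system call. */
    extern int ptrbounds(const void *addr, struct bounds *out);

    /* Copy at most as many bytes as the destination region can hold. */
    size_t bounded_copy(char *dst, const char *src, size_t len)
    {
        struct bounds b;
        if (ptrbounds(dst, &b) == 0) {
            size_t room = (size_t)(b.upper - dst);  /* writable space left */
            if (len > room)
                len = room;                         /* clip the overflow */
        }
        memcpy(dst, src, len);
        return len;
    }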


3. Delayed processing technique in critical sections for real-time Linux

                  In a real-time Linux system, the critical sections are thought to be one of the main factors causing problems with the start of real-time tasks. Traditional approaches for overcoming this issue either provide a weaker guarantee on the worst-case latency of real-time tasks, or impose heavy overhead on normal Linux tasks. In this paper, to guarantee the start time of a real-time task, the execution of a normal Linux task is delayed, made to wait at the beginning of a critical section, whenever the execution of that section would lead to an unacceptable delay in the start of the coming real-time task. In addition, to reduce the latency of the real-time task, a technique is proposed in which hardware interrupts are not disabled in most of the kernel's critical sections, so the timer interrupt can enter the kernel with little or no delay. Experimental results showed that the worst-case start latency of a real-time task is reduced to 16.7% of that in Linux 2.6.20, and the penalty to normal tasks is light in contrast to traditional approaches. The proposed technique is useful not only for constructing a real-time Linux, but also for developing other real-time systems in which the critical sections are significantly long.
For abstract and more CLICK HERE.
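
The delayed-entry idea can be sketched roughly as follows; all of the names below (now, next_rt_release, the worst-case execution time parameter) are illustrative assumptions, and the paper's actual kernel mechanism is more involved.

    typedef unsigned long long ns_t;   /* timestamps in nanoseconds */

    /* Illustrative primitives assumed to exist in the kernel. */
    extern ns_t now(void);
    extern ns_t next_rt_release(void);     /* release time of next RT task */
    extern void sleep_until(ns_t t);

    /* Entry hook for a normal task: if running this critical section
     * (with worst-case execution time 'wcet') would overlap the next
     * real-time task's start, wait until that task has been released. */
    void critical_section_enter(ns_t wcet)
    {
        while (now() + wcet > next_rt_release())
            sleep_until(next_rt_release());
        /* ... safe to take the lock and execute the section ... */
    }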


4. The design of a serial server based on semaphores under Linux



                         On the basis of an analysis of the disadvantages of the general-purpose method of server program design, a new method of serial server design based on semaphores in Linux is introduced, and an implementation framework proposed by the paper is depicted. The number of child processes produced by a server designed with this method no longer grows linearly with the cumulative number of client requests; it depends only on the peak number of clients at any given time. The proposed method can therefore remarkably improve the operational efficiency of the server.
For abstract and more CLICK HERE.
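
As a rough illustration of the idea, the sketch below caps the number of concurrent child processes with a counting semaphore sized to an assumed client peak, so children track concurrency rather than the cumulative request count. This is a generic POSIX sketch, not the paper's actual framework.

    #include <semaphore.h>
    #include <sys/mman.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define MAX_CONCURRENT 32   /* assumed peak number of simultaneous clients */

    extern void serve(int client_fd);   /* per-client work, defined elsewhere */

    void server_loop(int listen_fd)
    {
        /* Put the semaphore in shared memory so parent and children share it. */
        sem_t *slots = mmap(NULL, sizeof(sem_t), PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        sem_init(slots, 1, MAX_CONCURRENT);     /* pshared = 1 */

        for (;;) {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd < 0)
                continue;
            sem_wait(slots);            /* block once the peak is reached */
            if (fork() == 0) {          /* child: serve one client */
                serve(fd);
                close(fd);
                sem_post(slots);        /* free the slot on completion */
                _exit(0);
            }
            close(fd);                  /* parent: keep accepting */
            while (waitpid(-1, NULL, WNOHANG) > 0)
                ;                       /* reap any finished children */
        }
    }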


5. Implementation of differentiated services over ATM using Linux



                        The paper describes the concept and the implementation of differentiated services over ATM. ATM components are used to implement DiffServ traffic-conditioning functions such as shaping and policing. The implementation architecture, based on a Linux router platform, and some initial performance measurement results are presented.
For abstract and more CLICK HERE.
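
For orientation, DiffServ classifies packets by a code point (DSCP) carried in the IP header. On Linux, an application can mark its traffic through the standard IP_TOS socket option, as in the small sketch below; this is ordinary socket API usage shown for illustration, not the paper's ATM-based conditioning.

    #include <netinet/in.h>
    #include <netinet/ip.h>
    #include <sys/socket.h>

    /* Mark a socket's outgoing traffic as Expedited Forwarding (DSCP 46).
     * The DSCP occupies the upper six bits of the former TOS byte. */
    int mark_expedited(int sockfd)
    {
        int tos = 46 << 2;   /* EF code point shifted into DSCP position */
        return setsockopt(sockfd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
    }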

Seminar topics on DATABASES


1. Knowledge discovery in temporal databases


               Knowledge discovery in databases is the process of applying statistical, machine learning and other techniques to conventional database systems. Our survey of knowledge discovery systems indicates that, to date, there is no knowledge discovery system that deals with temporal databases. In this paper, we first give a brief description of temporal database systems and then present some examples to show how the ORES temporal database management system could provide the functionality needed to infer accurate and valuable knowledge from temporal databases. In particular, we discuss three common classes of database mining problems involving classifications, associations and sequences, and give a short description of our overall framework for knowledge discovery under research. The work focuses on two areas and their integration: on one side, data mining as a technique to increase the quality of data, and on the other, temporal databases as a technique to keep the history of data. We believe that their integration will lead to even higher-quality data.
For abstract and more CLICK HERE.

2. Hybrid approach to the software interworking problem


                Interworking problems between software services arise for a number of reasons; they may occur because the services, or their component parts, have evolved to fulfil roles different from the originally intended ones, resulting in conflicting requirements. Alternatively, the services themselves may be undocumented, poorly understood or required to interwork with services from third-party legacy systems. Interworking problems are difficult to predict and detect, as well as to resolve in an acceptable manner. The problems are particularly acute in the telecommunications domain, with its supplementary concerns of real-time behaviour, distributed control and data, high reliability, rapid evolution, and a deregulated market that is encouraging multiple service providers. Approaches to interworking problems may be characterised as either online or offline, and as formally or pragmatically/experimentally based. While numerous approaches have been developed, there have been very few attempts to combine formally based and online approaches into a single technique. The research goal is to develop such a technique, because experience with other combinations has led to the belief that they are not sufficient to deal with the interworking problems of complex, evolving software systems, as is common in telecommunications. This is particularly the case for systems which also have to interwork with third-party and legacy code: a hybrid approach which combines both online and formally based approaches promises to address problems which have proven very difficult to resolve with other techniques. The paper outlines a hybrid approach based on a transactional technique with rollback capability.
For abstract and more CLICK HERE.

Seminar topics on COMPUTER NETWORKS


1. Temporal aspects of real-time system design


                    The paper describes some of the more important aspects of real-time networks, and shows how such networks can be used to control and analyse the temporal properties of computer-based systems. At the heart of the approach is the characterisation of information flows in terms of a small set of well-defined protocols, and the reflection of these protocols within the functional, design, implementation and execution models of a system. This provides a well-defined and traceable path from functional description through to execution, allowing temporal requirements within applications to be expressed in a form such that they can be guaranteed (by analysis) to be met within the corresponding network solutions. The paper begins with a brief discussion of network models of computation, before moving on to the communication dynamics which lie at the heart of this network approach. Traceability and interface issues are then considered, followed by aspects of timing analysis. The paper concludes with a small example to illustrate the more important features of the approach.
For abstract and more CLICK HERE.

2. Algorithms in distributed load flow computation

                   This paper reports the use of a general technique to combine several different methods for solving complex systems of algebraic equations in the context of load flow calculations for electrical power networks. Such combinations of methods, referred to as `team algorithms', seem especially well suited to distributed-memory computer systems operating in an asynchronous environment. Experimental results from solving example problems on a commercially available parallel computer system show that a `synergetic effect' with considerable speedup can be obtained using these `team algorithms'.
For abstract and more CLICK HERE.

3. Signalling in future networks


                      With the convergence of the telecommunications, computer and entertainment industries, telecommunications networks are entering a phase of rapid growth in new services. To provide these services efficiently, the current network will need to evolve. However, no matter how radical this evolution, the need for signalling will remain as strong as ever. The paper discusses the following issues: the possible increase in signalling which may occur if these new services are taken up; the evolution of the signalling network topology; and whether or not the signalling changes should be of concern in networks research today.
For abstract and more CLICK HERE.


4. Neural networks and texture classification

          Texture plays an increasingly important role in computer vision. It has found wide application in remote sensing, medical diagnosis, quality control, food inspection and so forth. Research on texture started in the 1970s. The resurgence of research interest in artificial neural networks, and the techniques resulting from it, gives rise to a new paradigm for texture analysis. The paper presents an application of a neural network architecture, along with its training algorithm, the generating-shrinking algorithm, to texture classification, in comparison with the error backpropagation algorithm and the conventional K-nearest-neighbour rule. The texture feature sets considered in the paper include the statistical geometrical features and features derived from the two-dimensional discrete Fourier transform via rings and wedges.
For abstract and more CLICK HERE.
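
Since the paper benchmarks against the conventional K-nearest-neighbour rule, here is a minimal nearest-neighbour (k = 1) classifier over texture feature vectors for reference. The feature-vector length is an assumption, and feature extraction (statistical geometrical features, FFT rings and wedges) is left abstract.

    #include <float.h>

    #define NFEAT 16   /* assumed feature-vector length */

    /* Squared Euclidean distance between two feature vectors. */
    static double dist2(const double *a, const double *b)
    {
        double d = 0.0;
        for (int i = 0; i < NFEAT; i++)
            d += (a[i] - b[i]) * (a[i] - b[i]);
        return d;
    }

    /* Return the label of the training sample nearest to 'query'. */
    int nn_classify(const double train[][NFEAT], const int labels[],
                    int ntrain, const double query[NFEAT])
    {
        double best = DBL_MAX;
        int label = -1;
        for (int i = 0; i < ntrain; i++) {
            double d = dist2(train[i], query);
            if (d < best) { best = d; label = labels[i]; }
        }
        return label;
    }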


Seminar topics on DSP


1. A practical approach to digital signal processing

           Digital signal processing (DSP) now plays an important role in science and engineering because of the widespread use of computers to process, manipulate and store data. This makes it vital that engineering students have a good understanding of the fundamentals of the subject and a practical appreciation of the possibilities it offers. However, DSP is mathematically demanding, which makes it difficult for students to follow the subject. This paper discusses a practical approach we have adopted in Plymouth to the teaching of DSP. This has proved effective in giving the student a perceptive understanding of the fundamentals of DSP and a good appreciation of the power and versatility of DSP processors. An important part of our teaching is a set of stand-alone DSP boards, developed in house, which have provided useful and inexpensive platforms for demonstrating simple DSP algorithms in real time and for supporting final-year degree project work. Case studies are used to explore real-world problems and the interactions between DSP and other technologies, such as artificial neural networks and satellite communications.
For abstract and more CLICK HERE.
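
The kind of "simple DSP algorithm" typically demonstrated on such boards is an FIR filter; a minimal direct-form sketch follows. The coefficients and signal handling are generic, not taken from the paper.

    #include <stddef.h>

    /* Direct-form FIR filter: y[n] = sum over k of h[k] * x[n-k]. */
    void fir(const double *h, size_t ntaps,
             const double *x, double *y, size_t nsamples)
    {
        for (size_t n = 0; n < nsamples; n++) {
            double acc = 0.0;
            for (size_t k = 0; k <= n && k < ntaps; k++)
                acc += h[k] * x[n - k];   /* earlier inputs weighted by taps */
            y[n] = acc;
        }
    }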

2. Programmable DSP architecture for wireless communication systems


                   Programmable solutions for fast mobile communication systems are attracting ever-growing attention due to differing and evolving communication standards. They overcome the shortcomings of ASIC designs by allowing multimode operation, and of general-purpose processors by exploiting the inherent data-level parallelism in the application. MaRS, a macro-pipelined reconfigurable system, is a domain-specific programmable parallel DSP architecture aimed at harnessing the inherent parallelism in such applications. In this paper, we present the MaRS architecture along with the latest modifications and the algorithms that are mapped onto it. We have mapped an IEEE 802.11a WLAN transmitter, including a parallel FFT and a soft-decision Viterbi decoder, onto MaRS. Our simulation results show that the performance achieved on MaRS meets the stringent timing constraints of the IEEE 802.11a baseband transceiver at its highest rate, with 20% slack, leaving headroom for system-level power optimization. Finally, we have mapped the EEMBC telecom suite onto MaRS to evaluate and compare our architecture with existing architectures.
For abstract and more CLICK HERE.


Some general topics


1. Design and validation of the core of the Pentium III and Pentium 4 processors


                   In this paper, we present the design approach and an empirical validation of the power supply decoupling network, with particular emphasis on on-die capacitance. The impact of die decoupling on core performance for the 0.18 micron version of the Pentium® 4 has been presented previously (T. Rahal-Arabi et al., VLSI Circ. Symp. Dig. of Tech. Papers, pp. 220-223, 2002). This paper complements the previous work by presenting the design and validation approach for the IO power supply of both the Pentium® III and Pentium® 4 processors. As the Pentium® III processor has separate IO and core supplies, it is a more suitable vehicle for the IO validation. The design approach relies on using the power supply impedance model to determine the required decoupling. The model is widely used in the design of high-speed systems (A. Waizman and Chee-Yee Chung, IEEE Conf. Electrical Perf. of Electron. Packaging, pp. 65-68, 2000), but this paper shows that it is less adequate for evaluating performance. The validation approach consists of building several silicon wafers of the Pentium® III and Pentium® 4 processors with various amounts of decoupling. Extensive measurements are then conducted at the silicon, package, and system levels.
For abstract and more CLICK HERE.


2. Real-time virtual environment rendering system

     
                 In this paper, we describe a rendering system that is capable of capturing and rendering dynamic environments at about 20 fps using a commercially available PC (1.8 GHz single processor) and IEEE 1394 cameras (each with resolution of 320×240). It does not rely on any additional custom hardware. Such a system permits streaming applications that involve real-time view synthesis from multiple cameras, such as 3D teleconferencing or chatting. View synthesis is performed using the closest two cameras, and relies on plane sweeping to produce a photoconsistent virtual image. In addition, an alpha matte is estimated at the virtual view in order to improve the rendering quality at the foreground/background boundaries. We illustrate the performance of our system on several dynamic environment examples.
For abstract and more CLICK HERE.
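
The plane-sweep step can be illustrated under the simplifying assumption of two rectified grayscale cameras, where sweeping fronto-parallel depth planes reduces to testing disparities per pixel. The real system projects through full camera geometry and also estimates an alpha matte, both omitted here.

    #include <stdlib.h>

    #define MAX_DISP 64   /* assumed disparity (depth-plane) search range */

    /* For one pixel (x, y), return the disparity whose left/right colours
     * agree best, i.e. the most photoconsistent depth hypothesis. */
    int best_disparity(const unsigned char *left, const unsigned char *right,
                       int width, int x, int y)
    {
        int best_d = 0, best_cost = 256;
        for (int d = 0; d < MAX_DISP && d <= x; d++) {
            int cost = abs((int)left[y * width + x] -
                           (int)right[y * width + (x - d)]);
            if (cost < best_cost) { best_cost = cost; best_d = d; }
        }
        return best_d;
    }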


3. OS-controlled cache predictability for real-time systems


               Cache-partitioning techniques have been invented to make modern processors with an extensive cache structure useful in real-time systems, where task switches disrupt cache working sets and hence make execution times unpredictable. This paper describes an OS-controlled, application-transparent cache-partitioning technique. The resulting partitions can be transparently assigned to tasks for their exclusive use. The major drawbacks found in other cache-partitioning techniques, namely waste of memory and additions to the critical performance path within CPUs, are avoided using memory-colouring techniques that do not require changes within the chips of modern CPUs or on the critical path for performance. A simple filter algorithm commonly used in real-time systems, a matrix-multiplication algorithm and the interaction of both are analysed with regard to cache-induced worst-case penalties. Worst-case penalties are determined for different widely used cache architectures. Some insights regarding the impact of cache architectures on worst-case execution are described.
For abstract and more CLICK HERE.
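
The memory-colouring idea can be sketched as follows: in a physically indexed cache, the cache sets a page maps to are determined by its physical page number, so an OS that hands each task only pages of disjoint "colours" gives each task a private cache partition without hardware changes. The constants below are illustrative, not from the paper.

    #include <stdint.h>

    #define PAGE_SHIFT   12                  /* 4 KiB pages */
    #define CACHE_SIZE   (512 * 1024)        /* assumed 512 KiB cache */
    #define CACHE_WAYS   4                   /* assumed associativity */

    /* Number of distinct page colours = per-way cache size / page size. */
    #define NUM_COLORS   (CACHE_SIZE / CACHE_WAYS / (1 << PAGE_SHIFT))

    /* Colour of a physical page: low bits of the physical page number. */
    static inline unsigned page_color(uint64_t phys_addr)
    {
        return (unsigned)((phys_addr >> PAGE_SHIFT) % NUM_COLORS);
    }

    /* A page allocator that serves a task only pages whose page_color()
     * falls in that task's assigned colour set partitions the cache
     * among tasks with no modification to the CPU. */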


4. High-speed optical communication through DHBT technology


                   Recently, InP/InGaAs/InP double-heterostructure bipolar transistors (DHBTs) have attracted a lot of attention for the realization of high-speed (>40 Gb/s) optical communication systems (G. Raghaven et al., IEEE Spectrum, Oct. 2000; Y. Baeyens et al., IEEE GaAs IC Symp. Tech. Dig., pp. 125-128, 2001; Y.K. Chen et al., IEDM Tech. Dig., 2001, and OFC Tech. Dig., 2002). Much progress has been made to improve high-speed device performance, and fT values as high as 340 GHz have been reported (S. Lee et al., IEEE GaAs IC Symp. Tech. Dig., pp. 185-187, 2001; A. Fujihara et al., IEDM Tech. Dig., 2001; M. Ida et al., ibid., 2001). However, to our knowledge, there have been few reports on the reproducibility, yield and robustness of these types of devices. For successful implementation of these devices in high-speed ICs, in addition to high fT and fmax, a useful DHBT technology also needs to achieve low turn-on voltage Vce,sat, low knee voltage Vk, high breakdown voltages BVCEO and BVCBO, and high on-state breakdown voltage. Furthermore, excellent device yield, high circuit performance and uniformity are required. Optimization of all these parameters is critical for any given technology to be practically useful. In this paper, we report on a high-yield, high-performance InP/InGaAs DHBT process with excellent uniformity and reproducibility.
For abstract and more CLICK HERE.


5. Past, present and future of the computer field: a survey


                           The paper surveys some of the landmarks that have been passed as the computer field has developed from its early inception and comments on some of the issues that are now claiming attention. The paper is based on a lecture given by Prof. Wilkes to the IEE on 16th February 1984.
For abstract and more CLICK HERE.

6. Cooperative computing and control


                           The pace of innovation in the information technologies continues unabated, and massive investment in the digital infrastructure required for distributed information systems is proceeding steadily. The paper summarises some of the basic technology trends of the past five years and describes some of the consequent systems trends, including workstation networks, open system standards, digital communications and computer-supported co-operative work. A key characteristic of most of these developments is the need for systems that support co-operation between people, between people and machines, between machines, and between organisations. Some examples of these are discussed, including X.400 electronic mail, CCITT SS7 separate-channel signalling and electronic conferencing. Some of the newer disciplines required to support an engineering approach to building very large distributed information systems are then described, including protocol engineering, distributed systems architecture, object-oriented design, and system control and management. The paper concludes by reviewing some of the issues these technologies raise for the engineering professional and the role of the IEE in the 1990s.
For abstract and more CLICK HERE.


Papers outside IEEE (some external links)


Computer Science/IT seminar topics

        1. 3D Searching
        2. Biological Computers
        3. GPS
        4. BitTorrent
        5. Cooperative Linux
        6. Wireless LAN Security
        7. Linux Virtual Server
        8. Wireless USB
        9. Thermography
       10. 64-Bit Computing
       11. 4G Wireless Systems
       12. Storage Area Network
       13. Gigabit Ethernet



SOME WEBSITES FOR SEMINAR TOPICS

      1. www.101seminartopics.com
      2. www.bestneo.com
      3. www.seminartopics.in
      4. www.seminarprojects.com
      5. www.seminarsonly.com