IEEE PROJECT ON NETWORKING

Node Isolation Model and Age-Based Neighbor Selection in
Unstructured P2P Networks
Abstract: Previous analytical studies of unstructured P2P resilience have
assumed exponential user lifetimes and only considered age-independent neighbor
replacement. In this paper, we overcome these limitations by introducing a
general node-isolation model for heavy-tailed user lifetimes and arbitrary
neighbor-selection algorithms. Using this model, we analyze two age-biased
neighbor-selection strategies and show that they significantly improve the residual
lifetimes of chosen users, which dramatically reduces the probability of user
isolation and graph partitioning compared with uniform selection of neighbors. In
fact, the second strategy based on random walks on age-proportional graphs
demonstrates that, for lifetimes with infinite variance, the system monotonically
increases its resilience as its age and size grow. Specifically, we show that the
probability of isolation converges to zero as these two metrics tend to infinity. We
finish the paper with simulations in finite-size graphs that demonstrate the effect
of this result in practice.
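The age-biased idea above can be illustrated with a minimal Java sketch (the sampling scheme and names are illustrative assumptions, not the paper's exact algorithms): selecting the oldest of k uniformly sampled peers biases the choice toward long-lived users, which is the effect the analysis quantifies.

```java
import java.util.Random;

// Illustrative sketch: max-age-of-k neighbor selection vs. uniform selection.
// Under heavy-tailed lifetimes, older peers tend to have longer residual
// lifetimes, so biasing selection toward age improves neighbor stability.
public class AgeBiasedSelection {
    // Return the index of the oldest peer among k uniformly sampled candidates.
    public static int maxAgeOfK(double[] ages, int k, Random rng) {
        int best = rng.nextInt(ages.length);
        for (int i = 1; i < k; i++) {
            int cand = rng.nextInt(ages.length);
            if (ages[cand] > ages[best]) best = cand;
        }
        return best;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double[] ages = {0.5, 12.0, 3.2, 40.0, 1.1}; // hypothetical peer ages
        System.out.println("chosen age: " + ages[maxAgeOfK(ages, 3, rng)]);
    }
}
```

Averaged over many draws, the max-of-k rule selects substantially older peers than uniform sampling, mirroring the improved residual lifetimes the abstract reports.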
Java/.NET




Design Optimization of the Petaweb Architecture
Abstract: This paper explores the design modeling issues of the Petaweb, an
optical network architecture that provides fully meshed connectivity between
electronic edge nodes. The Petaweb is simple to manage, simplifies key
networking functions such as routing and addressing, and can offer a total capacity
of several petabits per second. From the topology standpoint, it is an unusual
structure: the backbone nodes are totally disconnected, whereas the edge nodes
are all reachable in one hop. The network design problem is a very hard
combinatorial problem. We propose a model and a heuristic approach based
on repeated matching. Computational results concerning the modeling
issues are presented and thoroughly discussed.


Spatio-Temporal Network Anomaly Detection by Assessing
Deviations of Empirical Measures
Abstract: We introduce an Internet traffic anomaly detection mechanism based
on large deviations results for empirical measures. Using past traffic traces we
characterize network traffic during various time-of-day intervals, assuming that it
is anomaly-free. We present two different approaches to characterize traffic: (i) a
model-free approach based on the method of types and Sanov’s theorem, and (ii) a
model-based approach modeling traffic using a Markov modulated process. Using
these characterizations as a reference we continuously monitor traffic and employ
large deviations and decision theory results to “compare” the empirical measure
of the monitored traffic with the corresponding reference characterization, thus,
identifying traffic anomalies in real time. Our experimental results show that
our methodology identifies even short-lived anomalies within a
small number of observations. Throughout, we compare the two approaches,
presenting their advantages and disadvantages in identifying and classifying temporal
network anomalies. We also demonstrate how our framework can be used to
monitor traffic from multiple network elements in order to identify both spatial
and temporal anomalies. We validate our techniques by analyzing real traffic
traces with time-stamped anomalies.
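The model-free idea can be pictured with a small sketch: by Sanov's theorem, the probability that an empirical measure deviates from the reference law decays exponentially in their KL divergence, so thresholding that divergence is a natural detector. The alphabet, threshold, and interface below are illustrative assumptions, not the paper's implementation.

```java
// Illustrative sketch of a method-of-types style detector: flag a window of
// traffic as anomalous when the KL divergence between its empirical measure
// and a reference distribution exceeds a threshold.
public class KLDetector {
    // D(p || q) in nats; p and q are probability vectors over the same alphabet.
    public static double kl(double[] p, double[] q) {
        double d = 0.0;
        for (int i = 0; i < p.length; i++)
            if (p[i] > 0) d += p[i] * Math.log(p[i] / q[i]);
        return d;
    }

    // Empirical measure (type) of a symbol sequence over alphabet {0..m-1}.
    public static double[] empirical(int[] xs, int m) {
        double[] p = new double[m];
        for (int x : xs) p[x] += 1.0 / xs.length;
        return p;
    }

    public static boolean anomalous(int[] window, double[] reference, double thresh) {
        return kl(empirical(window, reference.length), reference) > thresh;
    }

    public static void main(String[] args) {
        double[] ref = {0.7, 0.2, 0.1};              // assumed reference law
        int[] normal = {0,0,0,0,0,0,0,1,1,2};        // matches the reference
        int[] odd    = {2,2,2,2,2,1,1,1,0,0};        // far from the reference
        System.out.println(anomalous(normal, ref, 0.2)); // ~0 divergence: false
        System.out.println(anomalous(odd, ref, 0.2));    // large divergence: true
    }
}
```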


A Simple and Efficient Hidden Markov Model Scheme for
Host-Based Anomaly Intrusion Detection
Abstract: Extensive research activities have been observed on network-based
intrusion detection systems (IDSs). However, there are always some attacks that
penetrate traffic-profiling-based network IDSs. These attacks often cause very
serious damage, such as modifying critical host files. A host-based anomaly IDS
is an effective complement to the network IDS in addressing this issue. This
article proposes a simple data preprocessing approach to speed up a hidden
Markov model (HMM) training for system-call-based anomaly intrusion
detection. Experiments based on a public database demonstrate that this data
preprocessing approach can reduce training time by up to 50 percent with
unnoticeable intrusion detection performance degradation, compared to a
conventional batch HMM training scheme. More than 58 percent data reduction
has been observed compared to our prior incremental HMM training scheme.
Although this maximum gain incurs greater degradation in false-alarm
performance, the resulting performance is still reasonable.

Monitoring the Application-Layer DDoS Attacks for
Popular Websites
Abstract: Distributed denial of service (DDoS) attacks are a continuous, critical
threat to the Internet. Unlike attacks at the lower layers, new application-layer
DDoS attacks that utilize legitimate HTTP requests to overwhelm victim resources
are harder to detect. The case may be even more serious when such attacks mimic or
occur during the flash crowd event of a popular Website. Focusing on the
detection of such new DDoS attacks, we introduce a scheme based on document
popularity. An Access Matrix is defined to capture the spatial-temporal patterns
of a normal flash crowd. Principal component analysis and independent
component analysis are applied to abstract the multidimensional Access Matrix. A
novel anomaly detector based on hidden semi-Markov model is proposed to
describe the dynamics of the Access Matrix and to detect the attacks. The entropy of
document popularity with respect to the fitted model is used to detect potential
application-layer DDoS attacks. Numerical results based on real Web traffic data
are presented to demonstrate the effectiveness of the proposed method.

DDoS-Shield: DDoS-Resilient Scheduling to Counter
Application Layer Attacks
Abstract: Countering distributed denial of service (DDoS) attacks is becoming
ever more challenging with the vast resources and techniques increasingly
available to attackers. In this paper, we consider sophisticated attacks that are
protocol-compliant, non-intrusive, and utilize legitimate application-layer requests
to overwhelm system resources. We characterize application-layer resource
attacks as either request flooding, asymmetric, or repeated one-shot, on the basis
of the application workload parameters that they exploit. To protect servers from
these attacks, we propose a counter-mechanism, named DDoS-Shield, that consists
of a suspicion assignment mechanism and a DDoS-resilient scheduler. In contrast
to prior work, our suspicion mechanism assigns a continuous value as opposed to
a binary measure to each client session, and the scheduler utilizes these values to
determine if and when to schedule a session’s requests. Using testbed experiments
on a web application, we demonstrate the potency of these resource attacks and
evaluate the efficacy of our counter-mechanism. For instance, we mount an
asymmetric attack which overwhelms the server resources, increasing the
response time of legitimate clients from 0.3 seconds to 40 seconds. Under the
same attack scenario, DDoS-Shield improves the victims’ performance to 1.5
seconds.
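One way to picture a suspicion-driven scheduler (class and method names here are hypothetical, and the real DDoS-Shield policy is richer) is a priority queue that serves the least-suspicious pending session first, using the continuous suspicion score the abstract describes.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Illustrative sketch: serve sessions in order of increasing suspicion, so
// legitimate (low-suspicion) sessions are scheduled ahead of likely attackers.
public class SuspicionScheduler {
    public static final class Session {
        public final String id;
        public final double suspicion; // continuous score in [0, 1]
        public Session(String id, double suspicion) {
            this.id = id;
            this.suspicion = suspicion;
        }
    }

    private final PriorityQueue<Session> queue =
        new PriorityQueue<>(Comparator.comparingDouble((Session s) -> s.suspicion));

    public void enqueue(Session s) { queue.add(s); }

    // Serve the least-suspicious pending session, or null if none is pending.
    public Session serveNext() { return queue.poll(); }

    public static void main(String[] args) {
        SuspicionScheduler sched = new SuspicionScheduler();
        sched.enqueue(new Session("bot-like", 0.9));
        sched.enqueue(new Session("human-like", 0.1));
        System.out.println(sched.serveNext().id); // human-like served first
    }
}
```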


Plexus: A Scalable Peer-to-Peer Protocol Enabling Efficient
Subset Search
Abstract: Efficient discovery of information, based on partial knowledge, is a
challenging problem faced by many large scale distributed systems. This paper
presents Plexus, a peer-to-peer search protocol that provides an efficient
mechanism for advertising a bit sequence (pattern), and discovering it using any
subset of its 1-bits. A pattern (e.g., Bloom filter) summarizes the properties (e.g.,
keywords, service description) associated with a shared object (e.g., document,
service). Plexus has a partially decentralized architecture involving super peers. It
adopts a novel structured routing mechanism derived from the theory of Error
Correcting Codes (ECC). Plexus achieves better resilience to peer failure by
utilizing replication and redundant routing paths. Routing efficiency in Plexus
scales logarithmically with the number of super peers. The concept presented in
this paper is supported with theoretical analysis, and simulation results obtained
from the application of Plexus to partial keyword search utilizing the extended
Golay code.
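The advertise/subset-query mechanism can be sketched as follows; the filter width, hash scheme, and class names are illustrative assumptions and do not reflect Plexus's actual Golay-code-based routing. A pattern advertises a keyword set as Bloom-filter bits, and a query matches if all of its 1-bits are covered by the advertisement.

```java
import java.util.BitSet;

// Illustrative sketch: advertise keywords as a Bloom-filter bit pattern and
// answer subset queries (every 1-bit of the query must appear in the pattern).
public class PatternMatch {
    static final int M = 64; // filter width (illustrative choice)
    static final int K = 2;  // hash functions per keyword (illustrative choice)

    public static BitSet pattern(String[] keywords) {
        BitSet bits = new BitSet(M);
        for (String kw : keywords)
            for (int i = 0; i < K; i++)
                bits.set(Math.floorMod((kw + "#" + i).hashCode(), M));
        return bits;
    }

    // True iff every 1-bit of the query is also set in the advertisement.
    public static boolean subsetMatch(BitSet query, BitSet advertised) {
        BitSet q = (BitSet) query.clone();
        q.andNot(advertised);
        return q.isEmpty();
    }

    public static void main(String[] args) {
        BitSet ad = pattern(new String[]{"p2p", "search", "golay"});
        BitSet q  = pattern(new String[]{"search"});
        System.out.println(subsetMatch(q, ad)); // subset of the advertisement
    }
}
```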


The Design Trade-Offs of BitTorrent-Like File Sharing
Protocols
Abstract: The BitTorrent (BT) file sharing protocol is very popular due to its
scalability property and the built-in incentive mechanism to reduce free-riding.
However, in designing such P2P file sharing protocols, there is a fundamental
trade-off between keeping fairness and providing good performance. In particular,
the system can either keep peers (especially those resourceful ones) in the system
for as long as possible so as to help the system to achieve better performance, or
allow more resourceful peers to finish their download as quickly as possible so as
to achieve fairness. The current BT protocol represents only one possible
implementation in this whole design space. The objective of this paper is to
characterize the design space of BT-like protocols. The rationale for considering
fairness in the P2P file sharing context is to use it as a measure of willingness to
provide service. We show that there is a wide range of design choices, ranging
from optimizing the performance of file download time, to optimizing the overall
fairness measure. More importantly, we show that there is a simple and easily
implementable design knob so that the system can operate at a particular point in
the design space. We also discuss different algorithms, ranging from centralized
to distributed, in realizing the design knob. Performance evaluations are carried
out, both via simulation and network measurement, to quantify the merits and
properties of the BT-like file sharing protocols.

Projective Cone Scheduling (PCS) Algorithms for Packet
Switches of Maximal Throughput
Abstract: We study the (generalized) packet switch scheduling problem, where
service configurations are dynamically chosen in response to queue backlogs, so
as to maximize the throughput without any knowledge of the long term traffic
load. Service configurations and traffic traces are arbitrary.
First, we identify a rich class of throughput-optimal linear controls, which choose
the service configuration S maximizing the inner product ⟨S, BX⟩ when the backlog is
X. The matrix B is arbitrarily fixed in the class of positive-definite, symmetric
matrices with negative or zero off-diagonal elements. In contrast, positive off-diagonal
elements may drive the system unstable, even for subcritical loads. The
associated rich Euclidean geometry of projective cones is explored (hence the
name projective cone scheduling, PCS). The maximum-weight-matching (MWM)
rule is seen to be the special case where B is the identity matrix.
Second, we extend the class of throughput-maximizing controls by identifying a
tracking condition that allows applying PCS with any bounded time-lag without
compromising throughput. This enables asynchronous or delayed PCS
implementations; various examples are discussed.
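The PCS selection rule itself is easy to state in code: choose the configuration S maximizing S · (BX) for the current backlog X. The sketch below assumes configurations are given as vectors (an assumption for illustration); with B the identity it reduces to serving by raw backlog weight, the MWM special case.

```java
// Illustrative sketch of the PCS rule: pick the service configuration S that
// maximizes the inner product S . (B x) for the current backlog vector x.
public class PCS {
    // Return the index of the configuration maximizing s . (B x).
    public static int choose(double[][] configs, double[][] B, double[] x) {
        int n = x.length;
        double[] bx = new double[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) bx[i] += B[i][j] * x[j];
        int best = 0;
        double bestVal = Double.NEGATIVE_INFINITY;
        for (int k = 0; k < configs.length; k++) {
            double v = 0;
            for (int i = 0; i < n; i++) v += configs[k][i] * bx[i];
            if (v > bestVal) { bestVal = v; best = k; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] configs = {{1, 0}, {0, 1}};  // serve queue 0 or queue 1
        double[][] identity = {{1, 0}, {0, 1}}; // B = I recovers the MWM rule
        double[] backlog = {3, 7};
        System.out.println(choose(configs, identity, backlog)); // 1: longer queue
    }
}
```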

Residual-Based Estimation of Peer and Link Lifetimes in
P2P Networks
Abstract: Existing methods of measuring lifetimes in P2P systems usually rely
on the so-called Create-Based Method (CBM), which divides a given observation
window into two halves and samples users “created” in the first half every Δ time
units until they die or the observation period ends. Despite its frequent use, this
approach has no rigorous accuracy or overhead analysis in the literature. To shed
more light on its performance, we first derive a model for CBM and show that
small window size or large Δ may lead to highly inaccurate lifetime distributions.
We then show that create-based sampling exhibits an inherent tradeoff between
overhead and accuracy, which does not allow any fundamental improvement to
the method. Instead, we propose a completely different approach for sampling
user dynamics that keeps track of only residual lifetimes of peers and uses a
simple renewal-process model to recover the actual lifetimes from the observed
residuals. Our analysis indicates that for reasonably large systems, the proposed
method can reduce bandwidth consumption by several orders of magnitude
compared to prior approaches while simultaneously achieving higher accuracy.
We finish the paper by implementing a two-tier Gnutella network crawler
equipped with the proposed sampling method and obtain the distribution of
ultrapeer lifetimes in a network of 6.4 million users and 60 million links. Our
experimental results show that ultrapeer lifetimes are Pareto with shape α ≈ 1.1;
however, link lifetimes exhibit much lighter tails, with α ≈ 1.8.
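As a side note on the Pareto tails reported above, the shape parameter can be estimated from lifetime samples by the maximum-likelihood (Hill-style) estimator α̂ = n / Σ ln(xᵢ/xₘᵢₙ). The sketch below is a generic estimator under that textbook formula, not the paper's measurement pipeline.

```java
import java.util.Random;

// Illustrative sketch: maximum-likelihood estimate of the Pareto shape alpha
// from samples, alpha_hat = n / sum(ln(x_i / xmin)).
public class ParetoShape {
    public static double estimateAlpha(double[] samples, double xmin) {
        double s = 0;
        for (double x : samples) s += Math.log(x / xmin);
        return samples.length / s;
    }

    public static void main(String[] args) {
        Random rng = new Random(7);
        double alpha = 1.1, xmin = 1.0;
        double[] xs = new double[100000];
        // Inverse-transform sampling of Pareto(alpha, xmin): X = xmin * U^{-1/alpha}.
        for (int i = 0; i < xs.length; i++)
            xs[i] = xmin * Math.pow(rng.nextDouble(), -1.0 / alpha);
        System.out.println(estimateAlpha(xs, xmin)); // close to 1.1
    }
}
```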

On Understanding Transient Interdomain Routing Failures
Abstract: The convergence time of the interdomain routing protocol, BGP, can
last as long as 30 minutes. Yet, routing behavior during BGP route convergence is
poorly understood. During route convergence, an end-to-end Internet path can
experience a transient loss of reachability. We refer to this loss of reachability as
transient routing failure. Transient routing failures can lead to packet losses, and
prolonged packet loss bursts can make the performance of applications such as
Voice-over-IP and interactive games unacceptable. In this paper, we study how
routing failures can occur in the Internet. With the aid of a formal model that
captures transient failures of the interdomain routing protocol, we derive
sufficient conditions under which transient routing failures can occur. We further study
transient routing failures in typical BGP systems where commonly used routing
policies are applied. Network administrators can apply our analysis to improve
their network performance and stability.


Secure and Policy-Compliant Source Routing
Abstract: In today’s Internet, inter-domain route control remains elusive;
nevertheless, such control could improve the performance, reliability, and utility
of the network for end users and ISPs alike. While researchers have proposed a
number of source routing techniques to combat this limitation, there has thus far
been no way for independent ASes to ensure that such traffic does not
circumvent local traffic policies, nor to accurately determine the correct party to
charge for forwarding the traffic. We present Platypus, an authenticated source
routing system built around the concept of network capabilities, which allow for
accountable, fine-grained path selection by cryptographically attesting to policy
compliance at each hop along a source route. Capabilities can be composed to
construct routes through multiple ASes and can be delegated to third parties.
Platypus caters to the needs of both end users and ISPs: users gain the ability to
pool their resources and select routes other than the default, while ISPs maintain
control over where, when, and whose packets traverse their networks. We
describe the design and implementation of an extensive Platypus policy
framework that can be used to address several issues in wide-area routing at both
the edge and the core, and evaluate its performance and security. Our results show
that incremental deployment of Platypus can achieve immediate gains.


Understanding and Mitigating the Effects of Count to
Infinity in Ethernet Networks
Abstract: Ethernet’s high performance, low cost, and ubiquity have made it the
dominant networking technology for many application domains. Unfortunately,
it’s distributed forwarding topology computation protocol—the Rapid Spanning
Tree Protocol (RSTP)—is known to suffer from a classic count-to-infinity
problem. However, the cause and implications of this problem are neither
documented nor understood. This paper has three main contributions. First, we
identify the exact conditions under which the count-to-infinity problem manifests
itself, and we characterize its effect on forwarding topology convergence. Second,
we have discovered that a forwarding loop can form during count to infinity, and
we provide a detailed explanation. Third, we propose a simple and effective
solution called RSTP with Epochs. This solution guarantees that the forwarding
topology converges in at most one round-trip time across the network and
eliminates the possibility of a count-to-infinity induced forwarding loop.


A Large-Scale Hidden Semi-Markov Model for Anomaly
Detection on User Browsing Behaviors
Abstract: Many methods designed to defend against distributed denial
of service (DDoS) attacks focus on the IP and TCP layers rather than the
higher layers, and are not suitable for handling the new type of attack based
on the application layer. In this paper, we introduce a new scheme to
achieve early attack detection and filtering for application-layer DDoS
attacks. An extended hidden semi-Markov model is proposed to describe the
browsing behaviors of web surfers. In order to reduce the computational cost
introduced by the model’s large state space, a novel forward algorithm is derived
for the online implementation of the model based on the M-algorithm. Entropy of
the user’s HTTP request sequence fitting to the model is used as a criterion to
measure the user’s normality. Finally, experiments are conducted to validate our
model and algorithm.


Web User-Session Inference by Means of Clustering
Techniques
Abstract: This paper focuses on the definition and identification of “Web
user-sessions”, aggregations of several TCP connections generated by the same source
host. The identification of a user-session is nontrivial. Traditional approaches rely
on threshold-based mechanisms. However, these techniques are very sensitive to
the value chosen for the threshold, which may be difficult to set correctly. By
applying clustering techniques, we define a novel methodology to identify Web
user-sessions without requiring an a priori definition of threshold values. We
define a clustering based approach, we discuss pros and cons of this approach, and
we apply it to real traffic traces. The proposed methodology is applied to
artificially generated traces to evaluate its benefits against traditional threshold
based approaches. We also analyze the characteristics of user-sessions extracted
by the clustering methodology from real traces and study their statistical
properties. Web user-sessions tend to be Poisson, but correlation may arise during
periods of anomalous network/host behavior.


A Swarm Intelligence-Based P2P File Sharing Protocol
Using Bee Algorithm
Abstract: A P2P file sharing system is quite tricky to implement on mobile
ad-hoc networks as compared to a wired network. With the use
of Swarm Intelligence, the P2P file sharing methodology not only has an
optimized search process involving more selective node tracing but also
provides a far more time-efficient and robust sharing mechanism. A P2P file
sharing system implementation poses (a) the percentage of network area scanned and (b)
selective file retrieval from a set of file-bearing nodes as the biggest challenges. In
this paper, we propose another Swarm Intelligence technique, the Bees
Algorithm - P2PBA (Peer-to-Peer file sharing - Bees Algorithm), to tackle these
issues. Modeled on the lines of the food search behavior of honey bees, it optimizes
the search process by selectively going to more promising honey sources while scanning
a sizeable area. Following a description of the algorithm, the paper gives
simulation results for the network against the specified parameters, showing how
our algorithm makes the file sharing technique more efficient.


An Implementation of Bluetooth Service Application on the
Peer-to-Peer Based Virtual Home Platform
Abstract: This paper shows a design of the Bluetooth service application
architecture on the Peer-to-Peer based Virtual Home platform and an
implementation case. According to this method, it is possible for the Bluetooth-based
edge peer to participate in the IP-based virtual home network so that even
the Bluetooth device dependent services can be provided to the IP-based edge
peer in a distributed P2P manner beyond the personal area network.


Honeypot Scheme for Distributed Denial-of-Service Attack
Abstract: Honeypots are physical or virtual machines successfully used as
intrusion detection tools to detect worm-infected hosts. A denial of service (DoS)
attack consumes the resources of a remote client or the network itself, thereby
denying or degrading service to legitimate users. In a DoS defense
mechanism, a honeypot acts as a detective server among the pool of servers in a
specific network, where any packet received by the honeypot is most likely a
packet from an attacker. This paper points out a number of drawbacks, such as
the Legitimate Attacker and Link Unreachable problems, in existing honeypot
schemes. This paper proposes a new, efficient honeypot model that solves the
existing problems by opening a virtual communication port for any specific
communication between an authorized client and server, and by providing the facility
for any honeypot to act as an Active Server (AS).


WPCC: A Novel Web Proxy Cache Cluster
Abstract: In order to enhance web proxy cache cluster performance, a
novel web proxy cache cluster, WPCC, is presented. WPCC divides all back ends
into two groups: group A and group B. Group A takes charge of hit requests and
group B takes charge of miss requests. WPCC uses a different load balancing
strategy in each back-end group. A back end can be migrated from one
group to another when load imbalance between the groups occurs.
Simulation results show WPCC achieves better performance than
existing web proxy cache clusters.


Competitive FIFO Buffer Management for Weighted
Packets
Abstract: Motivated by providing differentiated services on the Internet, we
consider efficient online algorithms for buffer management in network switches.
We study a FIFO buffering model, in which unit-length packets arrive in an online
manner and each packet is associated with a value (weight) representing its
priority. The order of the packets being sent should comply with the order of their
arriving time. The buffer size is finite. At most one packet can be sent in each
time step. Our objective is to maximize weighted throughput, defined by the total
value of the packets sent. In this paper, we design competitive online FIFO
buffering algorithms, where competitive ratios are used to measure online
algorithms’ performance against the worst-case scenarios. We first provide an
online algorithm with a constant competitive ratio 2. Then, we study the
experimental performance of our algorithm on real Internet packet traces and
compare it with all other known FIFO online competitive algorithms. We
conclude that for the same instance, the algorithms’ experimental performances
could be different from their competitive ratios; other factors such as packet flow
characteristics and buffer sizes affect the outcome. Our algorithm outperforms
other online algorithms when the buffer resource is limited.
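A minimal sketch of one plausible constant-competitive rule, preemptive greedy (not necessarily the paper's exact algorithm): when the buffer is full, an arriving packet preempts the lightest buffered packet if it is heavier, and packets are always transmitted in arrival (FIFO) order.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of a preemptive greedy FIFO buffer: unit-length weighted
// packets, finite capacity, at most one packet sent per time step, and the
// objective of maximizing total sent weight.
public class GreedyFifoBuffer {
    private final Deque<Double> buf = new ArrayDeque<>();
    private final int capacity;
    private double sentValue = 0;

    public GreedyFifoBuffer(int capacity) { this.capacity = capacity; }

    public void arrive(double weight) {
        if (buf.size() < capacity) { buf.addLast(weight); return; }
        double min = Double.MAX_VALUE;
        for (double w : buf) min = Math.min(min, w);
        if (weight > min) {       // preempt the lightest buffered packet
            buf.remove(min);      // removes the first occurrence of min
            buf.addLast(weight);
        }                         // otherwise drop the arrival
    }

    // Send the head-of-line packet (FIFO order).
    public void sendOne() { if (!buf.isEmpty()) sentValue += buf.removeFirst(); }

    public double sentValue() { return sentValue; }

    public static void main(String[] args) {
        GreedyFifoBuffer b = new GreedyFifoBuffer(2);
        b.arrive(1); b.arrive(2); b.arrive(5); // 5 preempts the weight-1 packet
        b.sendOne(); b.sendOne();
        System.out.println(b.sentValue());     // 7.0
    }
}
```

Note that preempting the minimum keeps the remaining packets in arrival order, so the FIFO transmission constraint from the abstract is preserved.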


P2P File Sharing Software in IPv4/IPv6 Network
Abstract: This paper designs and realizes a P2P file sharing system (FSP2P)
using Java network programming. An adapter is contained in FSP2P. By
using the adapter, FSP2P can be applied not only to a pure IPv4 or IPv6 network, but
also to a coexisting IPv4/IPv6 network, thus realizing transparent
connectivity between IPv4 and IPv6 in a P2P file sharing system.


Design and Implementation of a SIP-based Centralized
Multimedia Conferencing System
Abstract: Multimedia conferencing has become a hot topic in communication in
recent years. There are already a few multimedia conferencing products based
on H.323. SIP, a more feasible protocol, is on the agenda to become the
call signaling protocol for conferencing. Most research on SIP-based
multimedia conferencing, however, has remained at the level of theory or
experiment. In this paper, we propose a feasible framework of SIP-based
centralized multimedia conferencing, which meets the requirements of the
standards and also develops the theories of XCON framework proposed by IETF.
We also present an actual implementation of the centralized conferencing server
by exploiting open source achievements, using a few accessory devices for
multimedia and data collaboration applications which are also implemented in our
laboratory. This paper introduces the whole architecture of the practical system,
and details the flow of the conference process.


Design and Implementation of Distributed Firewall System
for IPv6
Abstract: The deployment of IPv6 networks is becoming a reality as the
need for IPv6 grows due to the limitations of the IPv4 network.
However, security policy for the IPv6 network is not mature, and this is
an obstacle to IPv6 network deployment. Attackers can bypass the access
control of a packet filtering system unless the system can decrypt IPsec
packets. This paper introduces the implementation of a Distributed Firewall
System (DFS) that is applicable to the IPv6 network and is capable of
processing encrypted IPsec packets. The prototype introduced in this paper has
been implemented in order to be applied to the IPv6 network first.
Although its forwarding performance is limited, the prototype illustrates the basic
concepts of IPv6-based DFS equipment.


Hybrid Classifier Systems for Intrusion Detection
Abstract: This paper describes a hybrid design for intrusion detection that
combines anomaly detection with misuse detection. The proposed method
includes an ensemble feature selecting classifier and a data mining classifier. The
former consists of four classifiers using different sets of features and each of them
employs a machine learning algorithm named fuzzy belief k-NN classification
algorithm. The latter applies data mining technique to automatically extract
computer users’ normal behavior from training network traffic data. The outputs
of the ensemble feature-selecting classifier and the data mining classifier are then fused
to reach the final decision. The experimental results indicate that the hybrid
approach effectively generates a more accurate intrusion detection model,
detecting both normal usage and malicious activities.

Online Classification of Network Flows
Abstract: Online classification of network traffic is very challenging and still an
open issue due to the increase of new applications and traffic encryption.
In this paper, we propose a hybrid mechanism for online classification of network
traffic, in which we apply a signature-based method at the first level, and then we
take advantage of a learning algorithm to classify the remaining unknown traffic
using statistical features. Our evaluation with over 250 thousand flows collected
over three consecutive hours on a large scale ISP network shows promising results
in detecting encrypted and tunneled applications compared to other existing
methods.


A New Data-Mining Based Approach for Network Intrusion
Detection
Abstract: Nowadays, as information systems are more open to the Internet, the
importance of secure networks has tremendously increased. New intelligent
Intrusion Detection Systems (IDSs), based on sophisticated algorithms
rather than current signature-based detection, are in demand. In this paper, we
propose a new data-mining based technique for intrusion detection using an
ensemble of binary classifiers with feature selection and multiboosting
simultaneously. Our model employs feature selection so that the binary classifier
for each type of attack can be more accurate, which improves the detection of
attacks that occur less frequently in the training data. Based on the accurate binary
classifiers, our model applies a new ensemble approach which aggregates each
binary classifier’s decisions for the same input and decides which class is most
suitable for a given input. During this process, the potential bias of certain binary
classifier could be alleviated by other binary classifiers’ decision. Our model also
makes use of multiboosting for reducing both variance and bias. The
experimental results show that our approach provides better performance in terms
of accuracy and cost than the winning entry of the ‘Knowledge Discovery and
Data Mining’ (KDD) ’99 Cup challenge. Future work will extend our analysis to a
new ‘Protected Repository for the Defense of Infrastructure against Cyber
Threats’ (PREDICT) dataset as well as real network data.
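The per-class aggregation described above can be sketched abstractly (the interfaces and toy scorers here are hypothetical placeholders, not the paper's trained classifiers): each attack class gets its own binary scorer, and the ensemble labels an input with the class whose scorer is most confident.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.ToDoubleFunction;

// Illustrative sketch of one-vs-rest aggregation: one binary confidence scorer
// per class; classification picks the class with the highest confidence.
public class OneVsRestEnsemble {
    private final Map<String, ToDoubleFunction<double[]>> scorers = new HashMap<>();

    public void add(String label, ToDoubleFunction<double[]> scorer) {
        scorers.put(label, scorer);
    }

    // Pick the label whose binary scorer gives the highest confidence.
    public String classify(double[] features) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, ToDoubleFunction<double[]>> e : scorers.entrySet()) {
            double s = e.getValue().applyAsDouble(features);
            if (s > bestScore) { bestScore = s; best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        OneVsRestEnsemble ens = new OneVsRestEnsemble();
        ens.add("normal", f -> 1.0 - f[0]); // toy scorers on one feature
        ens.add("dos",    f -> f[0]);
        System.out.println(ens.classify(new double[]{0.9})); // dos wins
    }
}
```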


BotCop: An Online Botnet Traffic Classifier
Abstract: A Botnet is a network of compromised computers infected with
malicious code that can be controlled remotely under a common command and
control (C&C) channel. As one of the most serious security threats to the Internet, a
Botnet can not only be implemented with existing network applications (e.g., IRC,
HTTP, or Peer-to-Peer) but can also be constructed by unknown or creative
applications, thus making Botnet detection a challenging problem. In this
paper, we propose a new online Botnet traffic classification system, called
BotCop, in which the network traffic is fully classified into different application
communities by using payload signatures and a novel decision tree model, and
then on each obtained application community, the temporal frequent characteristic
of flows is studied and analyzed to differentiate the malicious communication
traffic created by bots from normal traffic generated by human beings. We
evaluate our approach with about 30 million flows collected over one day on a
large-scale Wi-Fi ISP network, and the results show that the proposed approach
successfully detects an IRC Botnet from about 30 million flows with a high
detection rate and a low false alarm rate.


Secure Data Harvesting in the Federation of Repositories
Managed by Multi Agent Systems
Abstract: Multi-Agent Systems (MAS) are used for efficient data processing on
hardware resources. We take advantage of such a system to establish an
efficient, intelligent, fast search system, which offers the advanced search facilities
needed by large distributed systems consisting of metadata repositories. In this
article we discuss the challenges of organizing such a system with high security.


An Agent-based Model for the Evolution of the Internet
Ecosystem
Abstract: We propose an agent-based model for the evolution of the Internet
ecosystem. We model networks in the Internet as selfish agents, each of which
tries to maximize a certain utility function in a distributed manner. We consider a
utility function that represents the monetary profit of a network. Our model
accounts for various important constraints such as geography, multihoming, and
various strategies for provider and peer selection by different types of networks.
We implement this model in a simulator, which is then used to solve the model
and determine a “steady state” of the network. We then present a set of “what-if”
questions that can be answered using the proposed model by studying the
properties of the resulting steady state.


Leader Election in Peer-to-Peer Systems
Abstract: This paper considers the problem of electing a leader in dynamic
peer-to-peer systems in which nodes exhibit a high rate of churn. Usually the leader
node is used to coordinate some tasks. Leader election influences the
performance of the system in two stages: the election operation itself, and the result of
the election, e.g., the leader’s ability to carry out its responsibilities. Because
peer-to-peer systems are highly dynamic, we focus on the first stage in order
to elect the leader with a highly efficient method. The proposed algorithm has
complexity O(1). In the algorithm, the election operations depend only on
local knowledge. The correctness of the algorithm is analytically proved, and its
scalability and efficiency are experimentally evaluated using simulations.
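An O(1) local-knowledge election can be as simple as deterministically taking the highest identifier in a node's local view; this sketch assumes such a rule for illustration and is not necessarily the paper's algorithm. Every node with the same local view reaches the same answer without extra message rounds.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: each node elects the highest identifier among itself
// and its known neighbors, using only local knowledge in constant time per id.
public class LocalLeaderElection {
    public static int electLeader(int selfId, List<Integer> neighborIds) {
        int leader = selfId;
        for (int id : neighborIds) if (id > leader) leader = id;
        return leader;
    }

    public static void main(String[] args) {
        System.out.println(electLeader(12, Arrays.asList(7, 42, 3))); // 42
    }
}
```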


Generic SNMP Proxy Agent Framework for Management of
Heterogeneous Network Elements
Abstract: Centralized management and monitoring of Network Elements (NEs)
in a communication network is very critical to ensure high availability and low
downtime. Many management protocols, such as SNMP, CMIP, TL1, and CORBA,
have been standardized by different bodies to facilitate unified management of
different types of NEs. However, many NEs do not support these standard
management protocols for various reasons and instead provide proprietary
management mechanisms. To facilitate management of these NEs, a
proxy/mediation function is required. In this paper, we present a Generic Proxy
Agent framework for management of heterogeneous Network Elements. This
framework consists of two components, namely an SNMP Proxy Agent and
Mediation device. The SNMP proxy agent uses SNMPv3 which provides a
comprehensive security framework which guarantees that the solution is not
vulnerable to most security violations. The framework uses SNMPv3
context names to differentiate the Network Elements from which information is
required. The proposed framework makes use of the standard open-source
Net-SNMP package, with the unique idea of a mediation device that bridges the
Net-SNMP agent and different types of Network Elements. The mediation device is a
separate software module that communicates with Network Elements by
converting SNMP requests into proprietary protocol messages and vice versa. The
proposed generic framework is implemented in Java, and hence provides platform
independence. The proposed framework has been validated in a Very Small
Aperture Terminal (VSAT) communication network where large numbers of
Network Elements are heterogeneous in nature.


