Wednesday, January 20, 2021

Huawei NE40 & CE12800 on EVE-NG

If you ever wanted to simulate a Huawei NE40 router or Huawei CE12800 series switches, you might have heard of eNSP (Enterprise Network Simulation Platform). Unfortunately, Huawei stopped developing eNSP and no longer distributes it. You can still find it on the internet, of course, but this makes your job harder if you ever want to simulate a multi-vendor environment that includes Huawei software.

On the EVE-NG website, the Huawei NE40 and Huawei CE12800 are not listed in the how-to section. But if you dig into the Huawei forums, you will find that it is possible.

Huawei CE12800

On this forum page you can find every detail needed to make the Huawei CE12800 image work on EVE-NG:

https://forum.huawei.com/enterprise/en/run-ce12800-ne40e-in-eve-ng/thread/653457-861?page=4 

You need to register to see the links.

I'm just sharing the info from the forum page again. I tested it and yes, it works:

  • Download the CE12800 image, the configuration file and the CE12800 icon, then extract the downloaded image.
  • Upload the configuration file (huaweice12800.yml) to the EVE-NG path /opt/unetlab/html/templates/intel/. If you are using an AMD CPU, the corresponding path is /opt/unetlab/html/templates/amd/.
  • Upload the CE icon file (ce.png) to the EVE-NG path /opt/unetlab/html/images/icons/.
  • Upload the CE12800 image to the EVE-NG path /opt/unetlab/addons/qemu/.

When adding new images to EVE-NG, the trick is that the image folder name must match the name defined in the .yml file: the folder name must start with the template name, optionally followed by a version suffix.

  • Fix the permissions using the command: /opt/unetlab/wrappers/unl_wrapper -a fixpermissions
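As a sanity check, the naming rule can be expressed in a couple of lines of Python (the version suffix below is made up for illustration):

```python
def folder_matches_template(folder, template):
    """EVE-NG picks up an image only if the qemu folder name equals the
    template name from the .yml file, optionally followed by a dash and
    a version string."""
    return folder == template or folder.startswith(template + "-")

# the template name comes from huaweice12800.yml; the version is hypothetical
print(folder_matches_template("huaweice12800-V200R005", "huaweice12800"))  # True
print(folder_matches_template("ce12800", "huaweice12800"))                 # False
```

The same convention shows up again in the SONiC post below, where mssonic.yml pairs with a folder named mssonic-202012.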

Huawei NE40

The NE40 image is also shared on the forum page: https://forum.huawei.com/enterprise/en/run-ne40-ce12800-in-eve-ng/thread/672651-865 . Unfortunately, these images are most probably converted from eNSP images, so there is no newer version; VRP8R11 is the latest. If you can manage to convert a standard image (.img to qcow2 in the right way?!), newer versions might work as well.

Here are the files (link is shared on Huawei forum page): https://mega.nz/file/sQEyTRbB#PnT37xH0tHXjeJWTd8xu9L1jdeiHkBnsGCzju3z3DmY

The instructions are the same as for the Huawei CE12800; just be careful to name the .yml file and the image folder consistently with each other.


Sunday, January 17, 2021

Microsoft Sonic Virtual Switch on Eve-NG

Microsoft's SONiC is probably the most popular open-source network operating system, currently being developed by a community of network market leaders: Broadcom, Marvell, Dell, Mellanox/NVIDIA, Intel, Microsoft and others. It supports 101 whitebox platforms (as of January 2021); you can find the latest list at https://github.com/Azure/SONiC/wiki/Supported-Devices-and-Platforms.

To test and see how SONiC works, it is better to create a virtual topology (e.g. a data center Clos topology) with a network emulation tool like GNS3, EVE-NG etc. On the SONiC GitHub page there is a script to build a SONiC virtual switch image for GNS3, but not for EVE-NG. I made SONiC work on EVE-NG. Here are the instructions:

I assume that you have already installed EVE-NG and started using it.

  • Copy the mssonic.yml file into /opt/unetlab/html/templates/intel or /opt/unetlab/html/templates/amd, based on your CPU.
  • Create a folder under /opt/unetlab/addons/qemu/ named "mssonic-version", like "mssonic-202012":

root@EVENG-SRLAB:~# mkdir /opt/unetlab/addons/qemu/mssonic-202012

  • Extract sonic-vs.img.gz.
  • Rename sonic-vs.img to virtioa.qcow2 (just rename, no conversion needed):
root@EVENG-SRLAB:/opt/unetlab/addons/qemu/mssonic-202012# mv sonic-vs.img virtioa.qcow2

The order of the EVE-NG ethernet interfaces and the SONiC ethernet interfaces is misaligned somehow: 10 interfaces are configured in the .yml file but only 9 come up in SONiC. So:

eve-ng ethernet 1 == sonic Ethernet0 (in the picture below you need to configure Ethernet0 in SONiC)
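Assuming the off-by-one offset is consistent across all ports (only ethernet 1 → Ethernet0 is verified above), the mapping can be sketched as:

```python
def sonic_interface(eveng_port):
    """Map an EVE-NG ethernet port number to the SONiC interface name
    wired to it, assuming a constant off-by-one offset."""
    return "Ethernet%d" % (eveng_port - 1)

# EVE-NG ethernet 1 is wired to SONiC Ethernet0
print(sonic_interface(1))  # -> Ethernet0
```

Treat this as a rule of thumb to check against your own lab, not a guarantee for every image version.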


Thursday, January 18, 2018

Huawei NetStream (Netflow) RFC Compatibility? Part 2


Those who read Part 1 of this post know that on Huawei routers, for NetStream packet sequence numbers to increase sequentially as defined in RFC 3954, "ip netstream export sequence-mode packet" must be explicitly configured. However, a problem still exists:

On Huawei routers running VRP5, no matter whether the device is configured with flow or packet mode, sequence numbers are independent for data and data template packets in the current software design. This causes a netflow analyzer to misinterpret the netflow data as if there were missing flows.

On Huawei routers running VRP8 the situation is a bit worse, because even the packet mode command "ip netstream export sequence-mode packet" does not exist.

The solution, both for sequence numbers to increase sequentially and for data and data template packets to share a common sequence number space, is to install version V8R9C10. Other than that, if you use VRP6-based software, you can adjust the data template refresh frequency as in the configuration below (template timeout of 50 minutes, refresh every 30 packets) to be less affected:
 <HUAWEI> system-view
 [~HUAWEI] ip netstream export template option timeout-rate 50
 [~HUAWEI] ip netstream export template option refresh-rate 30

Sunday, July 24, 2016

A Script to Control Quagga BGPD Daemon, Implementing Anycast DNS Server

One way to implement an Anycast DNS server across the internet is to use the BGP protocol and announce your DNS IP block from multiple locations, so that redundancy and low-latency targets are achieved. There are many ways to design such an infrastructure: you could use hardware routers, you could use load balancers in front of your servers, etc. I would like to share a script here for the scenario where the Quagga BGPD daemon is used as a software router, either on the DNS server itself or on another server acting purely as a router:
Using the Python dnspython library, it is easy to check a DNS server's ability to respond to different types of DNS queries. Based on the DNS server's answer, you can stop or start the Quagga BGPD daemon.
You can check the examples at http://www.dnspython.org/examples.html to understand how dnspython is used. A simple A-type request looks like this:
import dns.resolver

test_domain = "www.test.com"
server_to_test = ["127.0.0.1"]

anycast_server1 = dns.resolver.Resolver()
anycast_server1.timeout = 1.0
anycast_server1.lifetime = 1.0
anycast_server1.nameservers = server_to_test
answers = anycast_server1.query(test_domain, "A")
for data in answers:
    print data
If the DNS and BGPD daemons are on the same server, the script should forward the query to the local server: 127.0.0.1. Below is the script I wrote to check the local server with an A-type DNS request and, based on the answer, either start or stop the BGPD daemon. You can download the latest version of the script from https://github.com/ercintorun/dns-check-quagga-act.
# -*- coding: utf-8 -*-
import dns.resolver, psutil, commands, time, logging, datetime

###VARIABLES 
script_run_time = 60 
test_domain = "www.test.com"
server_to_test = ["127.0.0.1"] 

#############
#############
#############
###logging file folder and logging level config
logging.basicConfig(filename='/var/log/dnsscript.log', filemode='a', level=logging.INFO,
                    format='%(asctime)s [%(name)s] %(levelname)s (%(threadName)-10s): %(message)s')

###define dns servers, parameters
anycast_server1 = dns.resolver.Resolver()
anycast_server1.timeout=1.0
anycast_server1.lifetime=1.0
anycast_server1.nameservers = server_to_test 
   
###time to run the script
starttime = time.time()
timeout = time.time()+ script_run_time

###kill process function 
def kill_process(PROCNAME):
 for proc in psutil.process_iter():
  if proc.name() == PROCNAME:
   proc.kill()

###start a loop with an amount of script_run_time value

while True:
 time.sleep(0.8)
 if time.time()> timeout:
  break
 else:
###get all daemon names into list
  daemon_list=[]
  for proc in psutil.process_iter():
   daemon_list.append(proc.name())
###start the control
  if "bgpd" not in daemon_list: 
   try:
    answers = anycast_server1.query(test_domain, "A")
    commands.getoutput ("/etc/init.d/bgpd restart")
    logging.warning("DNS successful, BGP daemon is down, bgpd restarted")
    time.sleep(2) #give bgp 2 second to get up again
   except dns.resolver.NXDOMAIN:
    logging.warning("DNS exception: No such domain, BGP daemon is already down, no change done")
   except dns.resolver.Timeout:
    logging.warning("DNS exception: Timed out while resolving, BGP daemon is already down, no change done ") 
   except dns.exception.DNSException:
    logging.warning("DNS exception: Unhandled exception, BGP daemon is already down, no change done") 
  else:
   try:
    answers = anycast_server1.query(test_domain, "A")
    for data in answers: 
     resolved = data
     logging.info("DNS successful, nothing has been changed, last resolved ip is: "+str(resolved))
   except dns.resolver.NXDOMAIN:
    kill_process("bgpd")
    logging.warning("DNS exception: No such domain, BGP daemon terminated")
   except dns.resolver.Timeout:
    kill_process("bgpd")
    logging.warning("DNS exception: Timed out while resolving, BGP daemon terminated")
   except dns.exception.DNSException:
    kill_process("bgpd")
    logging.warning("DNS exception: Unhandled exception, BGP daemon terminated")
If you examine the script, you can see that it runs for 60 seconds. If you add this script to crontab with a 1-minute interval, it will check the DNS server roughly 60-70 times per minute (on the local server) continuously. I've used the psutil library to fetch the list of active daemons on Linux and check whether the BGPD daemon is active or not. The script:
  • If DNS is successful and BGPD is active, does nothing 
  • If DNS is successful and BGPD is passive, restarts BGPD using the "commands" library 
  • If DNS is unsuccessful and BGPD is active, stops BGPD 
  • If DNS is unsuccessful and BGPD is passive, does nothing
You can also see that the script logs its actions for each query to /var/log/dnsscript.log. If you would like to reduce the log size, change "level=logging.INFO" to "level=logging.WARNING" so that no-action-taken checks are not logged.
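For reference, the one-minute crontab entry mentioned above could look like this (the interpreter and script paths are hypothetical, adjust them to your system):

```
* * * * * /usr/bin/python /opt/scripts/dns-check-quagga-act.py
```

Since the script exits on its own after 60 seconds, consecutive cron runs do not overlap.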

Monday, January 26, 2015

Validating BGP Announcements by Automating Filter Generation with Python: Part2


In the first part of this post, I wrote that some tools use a "whois" client to query the RIR databases. So it is possible to use a whois CLI tool to gather the data you are interested in.

Whois

You can find the whois flags for RIPE at http://www.ripe.net/data-tools/support/documentation/queries-ref-card to elaborate your whois queries. Let's pull the routes of AS34984 on the Linux command line:

root@test:~# whois -h whois.ripe.net -i or as34984 | grep route:
route:          151.250.0.0/16
…output omitted

In order to get the route data as a variable to play with, you still need to parse the relevant part out of the output. A better approach, which I generally prefer, is to use a native library, so that your script does not depend on an external tool; as a result, it will be executable on Windows as well as Linux/Mac. For a native client library, check https://pypi.python.org/pypi/WhoisClient

Restful Web Services API


Even with a native library, you will still need to parse the output data. There is a better way, which is getting the data in a format like XML, YAML or JSON, so that it is easier to extract the relevant parts systematically. Fortunately, RIPE and ARIN have RESTful Web Services APIs, which are REST interfaces to their whois databases, invoked via HTTP requests. Here are the documentation links:

For ARIN documentation check: 


Querying RIPE  Web API and Processing  XML Input With Python


Looking at the documentation, it can be seen that for AS5400, which is British Telecom, the URL

http://rest.db.ripe.net/search.xml?query-string=as5400&inverse-attribute=origin can be used to get all routes that have an origin of AS5400. The only thing needed to get the prefixes of another AS is to change the "as5400" value in the URL itself. If you open the URL in your web browser, you will see an XML output like below:
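The original screenshot is not included here, so this is a trimmed sketch of the response's shape, reconstructed from the element paths the script below queries (the prefix value is made up):

```
<whois-resources>
  <objects>
    <object type="route">
      <primary-key>
        <attribute name="route" value="203.0.113.0/24"/>
        <attribute name="origin" value="AS5400"/>
      </primary-key>
      ...
    </object>
    ...
  </objects>
</whois-resources>
```

The prefix we want lives in the "value" attribute of the attribute element named "route".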



So if we can open this URL in Python, get the output, and then parse it as XML, everything will be fine. To do so, let's examine the code I wrote, part by part:

Examining the code: 

In the first part of the code, we need to import the libraries: urllib, urllib2 and xml.etree.ElementTree (you may also use lxml instead of xml.etree.ElementTree):


import urllib,urllib2
try:
    import xml.etree.cElementTree as ET
except ImportError:
    import xml.etree.ElementTree as ET

We have determined how to build a URL that pulls the prefixes originated from an AS. Creating an AS variable makes sense, so that in further usage you can change the AS number or take the value as an input.

as_to_pull_prefixes = "AS5400"
url = "http://rest.db.ripe.net/search.xml?query-string=%s&inverse-attribute=origin" % as_to_pull_prefixes

As the URL is ready, we need to open it and assign the output to a string variable:

fp = urllib2.urlopen (url)
response = fp.read()

Then we need to parse XML from string into an element:

tree = ET.fromstring(response)
fp.close()

The last part is to get the prefix values from the XML output, which we turned into an element using xml.etree.ElementTree. First we need to determine the hierarchy so that we can write a path argument. Looking at the XML output, the hierarchy is:

  • inside <objects> 
  • inside <object> that has a type value of  "route" (for ipv4 prefixes) 
  • inside <primary-key>
  • every <attribute> which has a name value of "route" 

To write such a path argument, you may want to have a look at https://docs.python.org/2/library/xml.etree.elementtree.html#elementtree-xpath. Here is the code:

interested =  tree.findall("./objects/object[@type='route']/primary-key/attribute[@name='route']")

Our job is still not done, because inside the tree we created (the "interested" variable) we need to take the values of the "value" attribute. Here is the last piece of the code:

for child in interested:
    print child.get('value')

Trying It Out 

Here is the output when I executed the script:


Here is the full code, which you can copy-paste:

import urllib,urllib2
try:
    import xml.etree.cElementTree as ET
except ImportError:
    import xml.etree.ElementTree as ET
###
###Variables which changes per request
as_to_pull_prefixes = "AS5400"
url = "http://rest.db.ripe.net/search.xml?query-string=%s&inverse-attribute=origin" % as_to_pull_prefixes
###
###Pull Info From IRR(RIPE)and assign it to a variable as string
fp = urllib2.urlopen (url)
response = fp.read()
###
###parse xml from  from string into an element 
tree = ET.fromstring(response)
fp.close()
###
###get interested data from element 
interested =  tree.findall("./objects/object[@type='route']/primary-key/attribute[@name='route']")
for child in interested:
    print child.get('value')
###############
##for more info on how to select interested data from xml
##https://docs.python.org/2/library/xml.etree.elementtree.html#elementtree-xpath
###############

Wednesday, January 21, 2015

Validating BGP Announcements by Automating Filter Generation with Python: Part1


Securing the Internet routing infrastructure has been a hot topic for a long time, as hijack events occur again and again, either by mistake or on purpose. Operators use different techniques to validate BGP announcements. I will try to explain creating an IP/AS prefix-list by pulling information from a RIR database, in our case RIPE, using Python. But first, let's take a look at the other methods in Part 1:

RPKI

Probably the best way to prevent BGP/IP hijacks is to use the RPKI infrastructure, which is a PKI framework (trust anchor mechanism). Unfortunately, the method needs the legitimate AS/IP owners to register their resources, and needless to say, hardly anyone is registering them; as a result, the method is still not practically applicable. As of 21.01.2015, the validation state for all IPv4 prefixes is only 5.59% valid. You can check the validation states at http://rpki.surfnet.nl/ipcomp.html.


RPSL 

Since RPKI is not yet applicable, we need another way to validate BGP announcements, and this is where the IRR RPSL database entries come into play. RPSL is a language commonly used by ISPs to describe their routing policies. ISPs can store their routing policies either on their own server or in open public whois databases including RIPE, RADB, APNIC etc. RIPE (the RIR for Europe) transformed its database to the RPSL format in 2001. Example entries in the RIPE database are:

as-set:          AS-FUNET
descr:           Macro with all ASes exported by FUNET
members:         AS1741
members:         AS1739
members:         AS565
members:         AS15496
members:         AS30754
members:         AS39098
members:         AS39857
members:         AS39662
tech-c:          FH437-RIPE
admin-c:         FA1183-RIPE
mnt-by:          AS1741-MNT
source:          RIPE # Filtered
The as-set object created by FUNET, listing its members

aut-num:         AS5400
as-name:         BT
descr:           British Telecommunications plc
org:             ORG-CNS3-RIPE
import:          from AS1741 action pref=20; accept AS-FUNET
British Telecom's RPSL entry, which accepts the announcements described in the AS-FUNET as-set entry.
route-set: AS4763:RS-ROUTES:AS681
descr:     prefix filter for AS681
members:   130.216.0.0/16, 130.217.0.0/16,
132.181.0.0/16, 138.75.0.0/16, 139.80.0.0/16,
140.200.0.0/16, 156.62.0.0/16, 192.73.21.0/24
tech-c:    JA39
mnt-by:    MAINT-TELSTRA-NZ
changed:   jabley@patho.gen.nz 19991118
source:    RADB
Route-set sample

Using these databases, it is possible to get information on AS-SETs, AS entries, route objects etc. We know that the RPKI validation rates are really low, so what about the RPSL entries? Unfortunately, the Internet Routing Registries are not that accurate either. A BGPmon analysis done in 2010 shows that only 46% of the global routing table entries have a matching route object.


If you want to update a BGP import policy automatically, you should push the facing AS owner to keep their IRR entries (route objects, as-sets, …) up to date. Some providers are also very strict about IRR usage, so why shouldn't you be?

Getting  Info From RIR Databases

Tools

There are many great, ready-to-use tools that do the whois queries behind the scenes; some of them are:
  • IRR Toolset
  • IRR Power Tools
  • NETİ:IRR
  • Bgpq3
  • Md
  • P2BGPTool
Let's take a look at one of these tools, namely bgpq3, before writing our own Python script:

Bgpq3 Installation and Usage

Download the compressed bgpq3 file from http://snar.spb.ru/prog/bgpq3/, then extract and install it (the steps below are for Ubuntu):

admin@ubuntu:~# wget http://snar.spb.ru/prog/bgpq3/bgpq3-0.1.21.tgz 
admin@ubuntu:~#  tar -xvzf bgpq3-0.1.21.tgz 
admin@ubuntu:~#  cd bgpq3-0.1.21/
admin@ubuntu:~/bgpq3-0.1.21# ./configure
admin@ubuntu:~/bgpq3-0.1.21# sudo make && sudo make install


After installing, we can create a prefix-list from the command line using the tool. Below is a sample for creating a Junos prefix-list for the as-set AS-FUNET:


You may check the bgpq3 man page for more parameters (cisco, juniper, as-path, ipv4, ipv6).

Command line tools are definitely useful, but if you want to use their output in a Python script, you need to parse the data. There are better ways, which you can continue reading about in Part 2.


Monday, January 12, 2015

Huawei NetStream (Netflow) RFC Compatibility?


Netflow, which has become a de facto industry standard, is a network protocol developed by Cisco for collecting IP traffic information and monitoring network traffic. Other vendors support alternative flow protocols, such as:

  •  Juniper® (Jflow)
  •  3Com/HP® , Dell® , and Netgear® (s-flow)
  •  Huawei® (NetStream)
  •  Alcatel-Lucent® (Cflow)
  •  Ericsson® (Rflow)


By analyzing the flow data you can draw a map of your traffic, see sources and destinations, determine normal flow baselines and thus detect DDoS attacks, and much more.

A sample configuration for Huawei is:

ip netstream export version 9
ip netstream sampler fix-packets 1000 inbound
ip netstream sampler fix-packets 1000 outbound
ip netstream export source <source-ip>
ip netstream export host <collector-ip> 9996
ip netstream export template timeout-rate 1
ip netstream timeout active 1
ip netstream timeout inactive 5
ip netstream tcp-flag enable
ip netstream mpls-aware label-and-ip

interface <interface-name>
 ip netstream inbound
 ip netstream sampler fix-packets 1000 inbound

slot <slot-id>
 ip netstream sampler to slot self

In production, based on the configuration above, we realized that the packet sequence numbers from an NE40X series router do not increase sequentially as defined in RFC 3954:


   Sequence Number
         Incremental sequence counter of all Export Packets sent from
         the current Observation Domain by the Exporter.  This value
         MUST be cumulative, and SHOULD be used by the Collector to
         identify whether any Export Packets have been missed.

Instead, the sequence numbers were increasing by the previous packet's flow record count (21, 9, 21, 16, 21 ...),

NetStream V9:
1          0.000000      1.1.1.256     82.3.3.256        CFLOW           1326   SEQ:510215546       total: 21 (v9) records
3          0.100533       1.1.1.256      82.3.3.256        CFLOW           606    SEQ:510215567       total: 9 (v9) records
4          0.163794       1.1.1.256      82.3.3.256        CFLOW           1326   SEQ:510215576       total: 21 (v9) records
5          0.264529       1.1.1.256      82.3.3.256        CFLOW           1026   SEQ:510215597       total: 16 (v9) records
6          0.327319       1.1.1.256      82.3.3.256        CFLOW           1326   SEQ:510215613       total: 21 (v9) records

which can be formalized as:

  • "SEQ2 = SEQ1 + count", where count is the number of flow records in the packet carrying SEQ1
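This relation can be verified against the captured values with a few lines of Python:

```python
# (sequence number, record count) pairs taken from the capture above
packets = [(510215546, 21), (510215567, 9), (510215576, 21),
           (510215597, 16), (510215613, 21)]

# each sequence number equals the previous one plus the previous
# packet's record count, i.e. SEQ2 = SEQ1 + count
for (seq1, count), (seq2, _) in zip(packets, packets[1:]):
    assert seq2 == seq1 + count
print("sequence numbers advance by record count, not by one per packet")
```

RFC 3954 instead expects the counter to advance by one per export packet, which is why collectors interpret these jumps as missed packets.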

In VRP, for NetStream to be compatible with the RFC, the sequence-mode configuration must be explicitly configured, as it is not the default behavior:


 <HUAWEI> system-view
 [HUAWEI] slot x
 [HUAWEI-slot-x] ip netstream export sequence-mode packet

As sequence numbers are tracked per SourceID by collectors, using the "cflow.source_id==4" filter in Wireshark (for this specific case) we verified that the problem is gone.


If you examine the capture carefully, you might see that there is still a small problem: even though the sequence ID is the same, there is an extra packet whose flow number is not lined up.

If you are working with a Huawei router, be careful: many features that you would expect to work by default must be explicitly configured. I will share these commands in another post, hopefully.

You can continue reading in Part 2:

 

Internetworking Hints Copyright © 2011 -- Template created by O Pregador -- Powered by Blogger