Wednesday, May 22, 2019

IP Broadcasting and Multicasting in the Cloud

Broadcasting and Multicasting in the Cloud

In public clouds such as Amazon EC2, Google Compute Engine and Microsoft Azure, native support for multicast and broadcast is missing. In fact, on AWS it has been on the "to do" list since 2009 (see https://forums.aws.amazon.com/thread.jspa?messageID=280285). Broadcast and multicast are integral parts of today's network solutions, and their absence is a missed opportunity for all public cloud platforms.

Additionally, in public clouds Layer 2 access is generally limited by the design of VPCs, Security Groups and ACLs. This makes public cloud networking very different from the datacenter, where there is usually full L2 access (and even traffic across VLANs, via inter-VLAN routing methods such as SVIs).

Broadcasting, Multicasting, Anycasting & Unicasting

Before delving into broadcast and multicast, let's take a look at the most common addressing mode in IP networks: unicast. In unicast, a packet is addressed to exactly one host, as in a typical client-server topology. The vast majority of Internet traffic is unicast, with servers serving a continuous stream of requests from billions of client endpoints (mobile, IoT and traditional PCs and laptops). One reason for this architecture is the transport protocol used: TCP is preferred for its guaranteed delivery and recovery mechanisms, and TCP works with unicast only, so the majority of the Internet is unicast. UDP, on the other hand, can be used with unicast, multicast and broadcast packets.

In broadcast addressing (see RFC 919, October 1984), a packet is addressed to all hosts on the local network rather than to a single host.

In multicast (see RFC 966, December 1985), which is basically a subset of broadcast mode, a packet is addressed not to all hosts but to a group of hosts called a "multicast group". Multicast groups are dynamic by default: any host can join, leave and rejoin "on the fly" using the Internet Group Management Protocol (IGMP). A multicast group is identified by an IP address from the reserved multicast range (224.0.0.0 - 239.255.255.255).

When a host joins a multicast group, it starts receiving messages addressed to that group. The protocol most commonly used with multicasting is the User Datagram Protocol (UDP), a flexible protocol that works with any addressing mode; TCP, on the other hand, works with unicast only.
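As a concrete illustration, the sketch below shows how a receiver joins a multicast group from Python. The group address and port here are arbitrary choices for the example; the key point is that the join is just a socket option, and the kernel sends the IGMP membership report on the host's behalf.

```python
import socket
import struct

GROUP = "239.1.2.3"  # arbitrary group from the 224.0.0.0-239.255.255.255 range
PORT = 5007          # arbitrary UDP port for this example

def membership_request(group, interface="0.0.0.0"):
    # struct ip_mreq: the group to join plus the local interface to join on.
    return struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton(interface))

def open_receiver(group=GROUP, port=PORT):
    # Multicast is carried over UDP, so we start from a datagram socket.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # Joining the group: the kernel emits the IGMP join for us.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    membership_request(group))
    return sock
```

A host that runs open_receiver() and then calls recvfrom() on the socket will see datagrams sent to 239.1.2.3:5007; leaving the group (IP_DROP_MEMBERSHIP) reverses the join.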

IPv4 has unicast, multicast & broadcast.
IPv6 has unicast, anycast & multicast; it has no broadcast (the all-nodes multicast group fills that role).

Anycast is a relatively newer addressing mode (loosely a subset of multicast) where a packet is delivered to only a single host within a group of hosts sharing the same address. Please note that anycast is defined as a distinct address type in IPv6 only, although anycast-style delivery is also achieved over IPv4 through routing.

Broadcasting and Multicasting at Layer 2

At Layer 2 we deal with Ethernet, the most prevalent Layer 2 protocol in use today, and the PDU here is called a "frame". An Ethernet frame embeds a source and a destination MAC address: a 48-bit address written in hexadecimal, such as 01:23:45:67:89:01. Of its 6 octets, the first 3 form the OUI (Organizationally Unique Identifier) and the last 3 exclusively identify the device. Within the first octet, the least significant bit (b0) indicates whether the address is unicast or group (multicast/broadcast), and the second least significant bit (b1) indicates whether the MAC address is locally or universally administered (locally unique or universally unique).

So, for example, in 06:00:00:00:00:00 the first octet (06) is 00000110 in binary: the b1 bit is 1, which means this is a locally administered address and not universally unique.

Now, a MAC address in an Ethernet frame is considered unicast (individual) if the b0 bit is set to 0, and a group address if b0 is set to 1. In the above example of MAC 06:00:00:00:00:00, the b0 bit of the first octet is 0 (06 = 00000110), so the address is unicast: the frame carrying it is a unicast PDU, encapsulates a unicast packet, and is meant to reach only a single host/NIC/node, unlike a broadcast frame (destination ff:ff:ff:ff:ff:ff), which is delivered to all hosts/nodes/NICs in the broadcast domain. Multicast frames also have this bit (b0) set to 1, with the caveat that they are only processed by hosts that have joined the specific multicast group!
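These two flag bits are easy to check programmatically. A minimal sketch (the helper names are my own, for illustration only):

```python
def first_octet(mac):
    """Return the first octet of a MAC string like '06:00:00:00:00:00' as an int."""
    return int(mac.split(":")[0], 16)

def is_group(mac):
    """b0 (the I/G bit): 0 = unicast/individual, 1 = group (multicast or broadcast)."""
    return bool(first_octet(mac) & 0b01)

def is_locally_administered(mac):
    """b1 (the U/L bit): 0 = universally administered, 1 = locally administered."""
    return bool(first_octet(mac) & 0b10)
```

For the example above, is_group("06:00:00:00:00:00") is False (a unicast address) while is_locally_administered() returns True; the broadcast address ff:ff:ff:ff:ff:ff reports is_group() True.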

When an IP unicast packet is passed to Layer 2 so that it can be sent to the next hop, it is wrapped in a unicast Ethernet frame. The MAC address of the next hop is determined using the Address Resolution Protocol (ARP, which incidentally uses broadcast Ethernet frames to find the MAC address for a given IP). If a switch does not know which port leads to the unicast destination MAC in a frame, it forwards the frame out all of its ports (except the originating port), an action known as unicast flooding.

IP broadcast and multicast do not use ARP. IP broadcasts are always sent to the "all-ones" Ethernet address ff:ff:ff:ff:ff:ff. Since the low bit of the first octet is 1, this is a group address, and the frame will be delivered to all hosts on the L2 network. IP multicast instead uses a formula to convert the IP multicast group address to an Ethernet address. This formula is described in RFC 1112. The group address 224.1.2.4, for example, is translated to 01:00:5e:01:02:04. The mapping is not unique: multiple group addresses correspond to the same multicast address on the Ethernet.
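The RFC 1112 mapping takes the fixed prefix 01:00:5e and appends the low 23 bits of the group address. A sketch, which also shows the ambiguity this implies:

```python
import socket

def multicast_mac(group_ip):
    """Map an IPv4 multicast group to its Ethernet address per RFC 1112:
    the fixed prefix 01:00:5e followed by the low 23 bits of the group."""
    octets = socket.inet_aton(group_ip)
    return "01:00:5e:{:02x}:{:02x}:{:02x}".format(
        octets[1] & 0x7F,  # drop the top bit of this octet -> only 23 bits survive
        octets[2],
        octets[3])
```

multicast_mac("224.1.2.4") gives 01:00:5e:01:02:04. Because 5 of the 28 significant group-address bits are discarded, 32 different groups share each Ethernet address: 225.129.2.4, for instance, maps to that same MAC.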

Applications using IP Multicasting

While many more applications use unicast addressing, multicasting does have a few important use cases. The two main areas seem to be infrastructure for high availability solutions, and to implement "zero config" discovery mechanisms. 
Examples of high availability solutions that use multicasting are the well-known keepalived (an implementation of the Virtual Router Redundancy Protocol, VRRP), uCarp, the Red Hat Cluster Suite (based on the open source Corosync/OpenAIS projects) and JGroups. In this category there is also the venerable Veritas Cluster Server (VCS). It should be mentioned that some of these projects have recently grown unicast support, precisely because of the lack of multicasting in the cloud; in all cases, however, multicasting remains the optimal solution. In this category, multicast networking is used to send "heartbeat" messages, which all nodes listen to. If, at some point, a message is not received for a certain amount of time, the nodes assume something went wrong and can start corrective action. At the Layer 2 level, solutions such as MSCS (Microsoft Cluster Service) also use multicast to send heartbeat messages.
Examples of discovery solutions that use multicasting include the Apple Bonjour/Zeroconf protocol (also known as multicast DNS or DNS service discovery), the Java NoSQL databases Hazelcast and EhCache, and the Oracle Grid Infrastructure. These solutions use multicast to announce a presence or a status on the network, without having to explicitly configure which other nodes exist.

Conclusion

People have tried to work around the lack of multicasting using various OS level tools. A few interesting ones are using n2n to set up a peer to peer L2 VPN between virtual machines, or using various approaches to turn multicast into unicast. Some of these approaches may have valid use cases. That said, in all cases, these solutions add significant complexity, and push what is essentially a network responsibility back into the OS.

Tuesday, April 16, 2019

SMB/CIFS/SAMBA/NFS

SMB

So what is SMB? SMB stands for “Server Message Block.” It’s a file sharing protocol that was invented by IBM and has been around since the mid-eighties. Since it’s a protocol (an agreed-upon way of communicating between systems) and not a particular software application, when you’re troubleshooting, you’re looking for the software that implements the SMB protocol.

The SMB protocol was designed to allow computers to read and write files to a remote host over a local area network (LAN). The directories on the remote hosts made available via SMB are called “shares.”

CIFS

CIFS stands for “Common Internet File System.” CIFS is a dialect of SMB. That is, CIFS is a particular implementation of the Server Message Block protocol, created by Microsoft.
CIFS vs SMB

Most people, when they use either SMB or CIFS, are talking about the same exact thing. The two are interchangeable not only in a discussion but also in application – i.e., a client speaking CIFS can talk to a server speaking SMB and vice versa. Why? Because CIFS is a form of SMB.

While they are the same top-level protocol, there are still differences in implementation and performance tuning (hence the different names). Protocol implementations like CIFS vs SMB often handle things like file locking, performance over LAN/WAN, and bulk modification of files differently.

CIFS vs SMB: Which One Should I Use?

In this day and age, you should always use the acronym SMB.

I know what you’re thinking – “but if they’re essentially the same thing, why should I always use SMB?”

Two reasons:-

1.) The CIFS implementation of SMB is rarely used these days. Under the covers, most modern storage systems no longer use CIFS, they use SMB 2 or SMB 3. In the Windows world, SMB 2 has been the standard as of Windows Vista (2006) and SMB 3 is part of Windows 8 and Windows Server 2012.

2.) CIFS has a negative connotation among pedants. SMB 2 and SMB 3 are massive upgrades over the CIFS dialect, and storage architects who are near and dear to file sharing protocols don’t appreciate the misnomer. It’s kind of like calling an executive assistant a secretary.

Samba and NFS

CIFS and SMB are far from the entirety of file sharing protocols and if you’re working to make legacy systems interoperate, it is quite likely that you’re also going to run into situations where others are necessary. Two other prominent file sharing protocols you should know about are Samba and NFS.
SAMBA
What is Samba? Samba is a collection of different applications which, when used together, let a Linux server perform network actions like file serving, authentication/authorization, name resolution and print services.

Like CIFS, Samba implements the SMB protocol which is what allows Windows clients to transparently access Linux directories, printers and files on a Samba server (just as if they were talking to a Windows server).

Crucially, Samba allows for a Linux server to act as a Domain Controller. By doing so, user credentials on the Windows domain can be used instead of needing to be recreated and then manually kept in sync on the Linux server.

NFS

The acronym NFS means “Network File System.” The NFS protocol was developed by Sun Microsystems and serves essentially the same purpose as SMB (i.e., to access file systems over a network as if they were local), but is entirely incompatible with CIFS/SMB. This means that NFS clients can’t speak directly to SMB servers.

So what does NFS mean in terms of your network communications toolkit? You should use NFS for dedicated Linux Client to Linux Server connections. For mixed Windows / Linux environments use Samba.

Wednesday, April 10, 2019

AWS CSAA - AWS Certified Solutions Architect Associate - 2019 - 4 Week Learning Path



Week 1:-

To get a good overview of the basic concepts and architecture, start with the official guide, AWS Certified Solutions Architect Official Study Guide: Associate Exam. You can watch the videos from https://www.udemy.com/aws-architect/learn/ and read the guide in parallel, as both are synced and the videos help you get through the written material faster.

Week 2:-

Go through the Linux Academy and BackSpace Academy videos, as these cover many more detailed scenarios with labs. I particularly recommend "The Orion Papers", which I found very useful. The concepts are very well explained with visual diagrams, where any single area (say, AWS databases) is covered at a high level in one single image. This aids in recalling the concepts and applying them correctly to scenario-specific questions in the exam.

Week 3:-

Go through the A Cloud Guru videos and the FAQs for all major AWS services.
Review the AWS Whitepapers

AWS Well-Architected Framework Whitepaper
AWS_Risk_and_Compliance_Whitepaper
AWS_Security_Whitepaper
AWS_Cloud_Best_Practices
AWS_Overview
AWS_Storage-Options


Week 4:-

Take Practice Tests

Braincert AWS Solutions Architect – Associate SAA-C01 Practice Exams, which provide extensive scenario based questions
Udemy AWS Solutions Architect – Associate SAA-C01 Practice Exams


Also Refer to CheatSheet before exam day here




Troubleshooting EC2 instances 


References :-

https://www.udemy.com/aws-architect/learn/
https://www.udemy.com/linux-academy-aws-certified-solutions-architect-associate/learn/
https://www.udemy.com/aws-certified-associate-architect-developer-sysops-admin/learn/
https://www.udemy.com/aws-certified-solutions-architect-associate/learn/

Thursday, October 11, 2018

Modifying the Default Inactivity Timeout in vSphere Web Client

For vSphere 6.5 Web Client

From the VCSA CLI, change the property session.timeout = 120 to session.timeout = 0 in /etc/vmware/vsphere-client/webclient.properties (using vi) and then restart the vsphere-client service:

service-control --stop vsphere-client
service-control --start vsphere-client



Steps


  1. Connect (ssh) to the VCSA as root
  2. vi /etc/vmware/vsphere-client/webclient.properties
  3. Change session.timeout = 120 to 0 in the above file
  4. Restart the vsphere-client service: service-control --stop vsphere-client; service-control --start vsphere-client




Tuesday, June 26, 2018

SSD & its Types (SLC, MLC, TLC)

The Anatomy of an SSD



SSD with two enclosed NAND flash memory chips installed. The controller chip is designed by PHISON.
    A. NAND Flash: The part where your data is stored, in blocks of non-volatile (does not require power to maintain data) memory.
    B. DDR Memory: Small amount of volatile memory (requires power to maintain data) used to cache information for future access. Not available on all SSDs.
    C. Controller: Acts as the main connector between the NAND flash and your computer. The controller also contains the firmware that helps manage your SSD.

TYPES OF SSD
  1. Single Level Cell (SLC)
  2. eMLC (enterprise Multi Level Cell)
  3. MLC (Multi Level Cell)
  4. TLC (Triple Level Cell)
Flash Type         SLC              eMLC             MLC               TLC
Read-Write Cycles  90k-100k         20k-30k          8k-10k            3k-5k
Cost               Most Expensive   Medium           Lower             Cheapest
Endurance          Highest          Medium           Lower             Lowest
Bits per Cell      1                2                2                 3
Usage              Enterprise       Enterprise       Consumer/Gaming   Consumer
Write Speed        Highest          Medium           Lower             Lowest
Reference :-
https://www.mydigitaldiscount.com/everything-you-need-to-know-about-slc-mlc-and-tlc-nand-flash.html

Thursday, May 24, 2018

Python API Execution Server Setup


In this post, I'm outlining the process I used to set up an execution server for running API tests with Python.

Setup an API automation ready execution server

  1. Confirm the Python version (python --version)
  2. Install pip (yum -y install python-pip)
  3. Install pipenv (https://docs.pipenv.org/) (pip install pipenv)
  4. Upgrade pip (pip install --upgrade pip)
  5. Install the requests module (pip2.7 install requests)
  6. Install git (yum install git -y)
  7. Clone the git repo (git clone git://github.com/requests/requests.git)
  8. cd requests; pip install .

I also found the guides below very useful for getting a fundamental understanding of API testing.
  1. Beginner's Guide to API Automation with Python - https://www.grossum.com/blog/beginner-s-guide-to-automating-api-tests-using-python
  2. API Tutorials - https://www.dataquest.io/blog/python-api-tutorial/


GET Status codes
  • 200 -- everything went okay, and the result has been returned (if any)
  • 301 -- the server is redirecting you to a different endpoint. This can happen when a company switches domain names, or an endpoint name is changed.
  • 401 -- the server thinks you're not authenticated. This happens when you don't send the right credentials to access an API (we'll talk about authentication in a later post).
  • 400 -- the server thinks you made a bad request. This can happen when you don't send along the right data, among other things.
  • 403 -- the resource you're trying to access is forbidden -- you don't have the right permissions to see it.
  • 404 -- the resource you tried to access wasn't found on the server.
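In an API test these codes typically become assertions. A minimal stdlib-only sketch (the helper names are my own, not part of any library):

```python
from http import HTTPStatus

def phrase(status):
    """Human-readable reason phrase for a status code, e.g. 404 -> 'Not Found'."""
    try:
        return HTTPStatus(status).phrase
    except ValueError:
        return "Unknown"

def check_get(status):
    """Assert the happy-path outcome described above for a GET request."""
    assert status == 200, "GET failed: {} {}".format(status, phrase(status))
```

check_get(200) passes silently, while check_get(404) fails with the message "GET failed: 404 Not Found", which reads much better in a test report than a bare integer comparison.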

Finding your way in vi (the editor)

While in command mode (case sensitive)
  • move the cursor with arrow keys; if there aren't any arrow keys, use j,k,h,l (Fn + left/right key to navigate to start/end of line)

  • i - change to insert mode (before cursor)
  • a - change to insert mode (after cursor)
  • A - change to insert mode (at end of line)
  • r - replace one character
  • R - overwrite text
  • x - delete one character
  • dd - delete one line
  • yy - yank line (copy)
  • p - paste deleted or yanked text after cursor
  • P - paste deleted or yanked text before cursor
  • G - go to end of the file
  • 1G - go to top of the file
  • J - merge next line with this one
  • / - search, follow / with text to find
  • :wq - write file and quit
  • :q! - quit without saving
  • :%s/old/new/g - substitute; replace "old" with "new" on all lines
  • :g/pattern/d - delete all lines that match the pattern
  • 0 - move to the beginning of the current line
  • $ - move to end of line
  • H - move to the top of the current window (high)
  • M - move to the middle of the current window (middle)
  • L - move to the bottom of the current window (low)
  • 20G - move to line 20 of the file
While in insert mode
  • ESC - change to command mode
  • any text typed is entered at the cursor
Typical vi session
  1. Type "vi file.txt" at command prompt
  2. Move cursor to where new text will be added
  3. Type "i" to change to insert mode
  4. Type new text
  5. Type ESC to go back to command mode
  6. type ":wq" and ENTER to write the file and quit

Thursday, May 3, 2018

Jenkins installation as a service using .war files on a Virtual Machine


Steps to bring up a Jenkins instance on a centOS7 instance


1. Bring up a VM and install any Linux distro of your preference. In this case, I'm using CentOS 7.
2. Download the supported Jenkins version, v1.624: wget https://updates.jenkins-ci.org/download/war/1.624/jenkins.war
3. Install Java 7: yum install java-1.7.0-openjdk (https://www.atlantic.net/cloud-hosting/how-to-install-java-jre-jdk-centos-7/) and configure the OS to use Java 1.7 by default
4. Set the environment (echo 'export JAVA_HOME="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.171-2.6.13.0.el7_4.x86_64"' | sudo tee -a /etc/profile; echo 'export JRE_HOME=/usr/lib/jvm/jre/' | sudo tee -a /etc/profile; source /etc/profile)
5. Start Jenkins manually to verify it works (cd ~; java -jar jenkins.war)
6. Configure Jenkins as a service: copy the war to the location referenced by ExecStart (cp ~/jenkins.war /usr/local/bin/), then cd /etc/systemd/system; vi jenkins.service and add the lines below to it.

[Unit]
Description=Jenkins Service
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/bin/java -jar /usr/local/bin/jenkins.war
Restart=on-abort

[Install]
WantedBy=multi-user.target

7. Start the Jenkins Service (systemctl daemon-reload; systemctl start jenkins.service)


Establishing trust between Jenkins and other applications

Get the certificate of the application (in this case it's named as 'ccm.cer')
  1. echo 'export JAVA_HOME="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.171-2.6.13.0.el7_4.x86_64"' | sudo tee -a /etc/profile
  2. echo 'export JRE_HOME=/usr/lib/jvm/jre/' | tee -a /etc/profile
  3. source /etc/profile

  1. echo $JRE_HOME
  2. echo $JAVA_HOME

  1. java InstallCert 10.193.180.190
  2. cp jssecacerts $JAVA_HOME/lib/security
  3. cp jssecacerts ~/.keystore
  4. keytool -list -alias 10.193.180.190-1
  5. cd /usr/lib/jvm/jre-1.8.0-openjdk/lib/security
  6. keytool -importcert -file /root/ccm.cer -keystore cacerts -alias 10.193.180.190-1
  7. keytool -list -alias 10.193.180.190-1
  8. reboot
  9. systemctl status jenkins.service
  10. systemctl start jenkins.service
  11. systemctl status jenkins.service
  12. java SSLPoke 10.193.180.190 443


Tuesday, December 19, 2017

HTTP Status codes & 7 most common HTTP Methods for API Testing

HTTP STATUS CODES

The list of HTTP status codes is available here and also in RFC 7231. However, the status codes do not define the error response when things go wrong; the error response format is defined in RFC 7807.
Overall, the status codes are in the range 1xx-5xx and are divided into the five classes below, with the first digit signifying the class of response.
  1. 1xx Informational response
  2. 2xx Success
  3. 3xx Redirection
  4. 4xx Client errors
  5. 5xx Server errors
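The class of a response can be derived from the first digit alone; a one-function sketch:

```python
# Map any 1xx-5xx status code to its class via integer division by 100.
CLASSES = {1: "Informational response", 2: "Success", 3: "Redirection",
           4: "Client errors", 5: "Server errors"}

def status_class(code):
    """Return the class name for a status code, e.g. 404 -> 'Client errors'."""
    return CLASSES[code // 100]
```

This works even for the non-standard codes listed below, since the convention of the leading digit still holds.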
The following codes are not specified by any standard.

103 Checkpoint
Used in the resumable requests proposal to resume aborted PUT or POST requests.
218 This is fine (Apache Web Server)
Used as a catch-all error condition for allowing response bodies to flow through Apache when ProxyErrorOverride is enabled. When ProxyErrorOverride is enabled in Apache, response bodies that contain a status code of 4xx or 5xx are automatically discarded by Apache in favor of a generic response or a custom response specified by the ErrorDocument directive.
419 Page Expired (Laravel Framework)
Used by the Laravel Framework when a CSRF Token is missing or expired.
420 Method Failure (Spring Framework)
A deprecated response used by the Spring Framework when a method has failed.
420 Enhance Your Calm (Twitter)
Returned by version 1 of the Twitter Search and Trends API when the client is being rate limited; versions 1.1 and later use the 429 Too Many Requests response code instead.
430 Request Header Fields Too Large (Shopify)
Used by Shopify, instead of the 429 Too Many Requests response code, when too many URLs are requested within a certain time frame.
450 Blocked by Windows Parental Controls (Microsoft)
A Microsoft extension code, returned when Windows Parental Controls are turned on and blocking access to the requested webpage.
498 Invalid Token (Esri)
Returned by ArcGIS for Server. Code 498 indicates an expired or otherwise invalid token.
499 Token Required (Esri)
Returned by ArcGIS for Server. Code 499 indicates that a token is required but was not submitted.
509 Bandwidth Limit Exceeded (Apache Web Server/cPanel)
The server has exceeded the bandwidth specified by the server administrator; this is often used by shared hosting providers to limit the bandwidth of customers.
526 Invalid SSL Certificate
Used by Cloudflare and Cloud Foundry's gorouter to indicate failure to validate the SSL/TLS certificate that the origin server presented.
529 Site is overloaded
Used by Qualys in the SSLLabs server testing API to signal that the site can't process the request.
530 Site is frozen
Used by the Pantheon web platform to indicate a site that has been frozen due to inactivity.
598 (Informal convention) Network read timeout error
Used by some HTTP proxies to signal a network read timeout behind the proxy to a client in front of the proxy.

Internet Information Services
  1. 440 Login Time-out
  2. 449 Retry With
  3. 451 Redirect
nginx
  1. 444 No Response
  2. 494 Request header too large
  3. 495 SSL Certificate Error
  4. 496 SSL Certificate Required
  5. 497 HTTP Request Sent to HTTPS Port
  6. 499 Client Closed Request
Cloudflare
  1. 520 Web Server Returned an Unknown Error
  2. 521 Web Server Is Down
  3. 522 Connection Timed Out
  4. 523 Origin Is Unreachable
  5. 524 A Timeout Occurred
  6. 525 SSL Handshake Failed
  7. 526 Invalid SSL Certificate
  8. 527 Railgun Error
  9. 530
AWS Elastic Load Balancer
  1. 460 - The client closed the connection with the load balancer before the idle timeout period elapsed. This typically happens when the client's timeout is shorter than the Elastic Load Balancer's idle timeout.
  2. 463 - The load balancer received an X-Forwarded-For request header with more than 30 IP addresses.

HTTP Methods

  1. GET
  2. POST
  3. PUT
  4. DELETE
  5. HEAD
  6. PATCH
  7. OPTIONS
GET requests are used to retrieve data from an API server at the specified resource. A successful/valid GET request will return a response with status code 200.

POST requests are used to send data to an API server to create or update a resource. A successful/valid POST request will return a response with status code 201.

PUT requests are used to send data to the API to create or update a resource. The difference is that PUT requests are idempotent, i.e. calling the same PUT request multiple times will always produce the same result.
In contrast, calling a POST request repeatedly may have the side effect of creating the same resource multiple times. A successful/valid PUT request will return a response with status code 200.

A DELETE request deletes the resource at the specified URL. A successful/valid DELETE request will return a response with status code 200, while a DELETE request for a resource that is already deleted or non-existent will return a status code of 404.

HEAD requests are the same as GET but don't return the response body. For example, if a GET request returns a list of names in the response body, a HEAD request makes the same request but won't return the list of names. A HEAD request is generally useful for confirming whether a GET request will retrieve data successfully, e.g. before downloading a large file or response body.

It's important to note that not every endpoint that supports GET will support HEAD. This depends entirely on the API you're testing.
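To see the GET/HEAD relationship concretely, the sketch below spins up a throwaway local server (its paths and payloads are invented for this example) and issues both methods against it, plus a GET for a missing resource:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"
    NAMES = b'["alice", "bob"]'  # invented payload for the example

    def _headers(self, code, length):
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(length))
        self.end_headers()

    def do_GET(self):
        if self.path == "/names":
            self._headers(200, len(self.NAMES))
            self.wfile.write(self.NAMES)          # GET returns the body...
        else:
            self._headers(404, 0)

    def do_HEAD(self):
        if self.path == "/names":
            self._headers(200, len(self.NAMES))   # ...HEAD: same headers, no body
        else:
            self._headers(404, 0)

    def log_message(self, *args):
        pass  # keep the example quiet

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/names")
resp = conn.getresponse()
get_status, get_body = resp.status, resp.read()

conn.request("HEAD", "/names")
resp = conn.getresponse()
head_status, head_body = resp.status, resp.read()

conn.request("GET", "/missing")
resp = conn.getresponse()
missing_status = resp.status
resp.read()

conn.close()
server.shutdown()
```

GET /names returns 200 with the JSON body; HEAD /names returns the same 200 and Content-Length but an empty body; the unknown path yields 404.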

PATCH applies only partial modifications to the resource. The difference between PATCH and PUT is that a PATCH request is not required to be idempotent (like a POST request).
With a PATCH request, you may only need to send, say, the updated username in the request body - as opposed to POST and PUT, which require the full user entity.

An OPTIONS request returns data describing what other methods and operations the server supports at the given URL. OPTIONS requests are more loosely defined and used than the others, making them a good candidate for testing fatal API errors. If an API isn't expecting an OPTIONS request, it's good to put a test case in place that verifies failing behavior.


Monday, December 18, 2017

Overlay Networks

Network Overlays


These overlays have long been used to extend physical network (underlay) subnets/segments across physical boundaries, with routers and switches acting as the end points. Some examples are:- 
  1. OTV
  2. LISP
  3. Traditional VPNs
  4. FabricPath (Source Learning) - All Links Active Active

Host Overlays

The hypervisor vSwitches or physical switches such as the Nexus 9k act as end points; usually a single domain admin, using a single controller such as APIC, can deploy and administer this kind of network.

VXLAN (Multicast - Flooding - UDP)
NVGRE (Unicast)
STT (Stateless Transport Tunneling)


Hybrid Overlays

The traffic can move between virtual and physical nodes, both acting as end points, to provide a seamless extension of the L2 physical boundary.

VXLAN - A tunneling protocol which encapsulates L2 Ethernet frames in Layer 3 UDP packets on port 4789; this allows L2 subnets to span physical L3 networks.

VXLAN has a 24-bit identifier space (about 16 million segment IDs) whereas VLAN, with its 12-bit ID, has only 4096.
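The VXLAN header itself is only 8 bytes: a flags octet (0x08 marks the VNI as valid), 3 reserved bytes, the 24-bit VNI, and a final reserved byte. A sketch of building it:

```python
import struct

VXLAN_UDP_PORT = 4789     # IANA-assigned destination port for VXLAN
FLAG_VNI_VALID = 0x08     # "I" flag: the VNI field carries a valid ID

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: flags(1) + reserved(3) + VNI(3) + reserved(1)."""
    if not 0 <= vni < 2 ** 24:   # the 24-bit field is what yields ~16M segments
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!B3s3sB", FLAG_VNI_VALID, bytes(3),
                       vni.to_bytes(3, "big"), 0)
```

The full on-wire packet is then outer Ethernet + outer IP + UDP (destination port 4789) + this header + the original frame.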


  1. L2 VNI - VXLAN Network Identifier carried in VXLAN packets bridged across VTEPs.
  2. L3 VNI - VXLAN Network Identifier carried in VXLAN packets routed across VTEPs. This VNI is linked per tenant VRFs.
  3. VNI - VXLAN Network Identifier.
  4. VTEP - VXLAN Tunnel Endpoint.
  5. VXLAN L2 Gateway - VTEP capable of switching VLAN-VXLAN, VXLAN-VLAN packets within same VNI.
  6. VXLAN L3 Gateway - VTEP capable of routing VXLAN across different VNIs.

Frames received from the overlay network, with their L2-L3-L4 headers intact, are encapsulated in a new outer MAC + IP header before being sent across the underlay, e.g. multicast to a group IP such as 239.1.1.2.

Reference :- https://www.youtube.com/watch?v=kAoa7djX3Ew


Wednesday, August 2, 2017

7 Steps for Setting up a CentOS 7 NFS Server from Scratch

Steps:-

1. Setup Network & Hostname
2. Download all required packages (nfs-utils & nfs-utils-lib)
3. Create & Format the Partition (use parted for >2TB shares)
4. Configure the share point, mount the partition & configure auto mount
5. Append /etc/exports & /etc/fstab
6. Configure Firewall to allow NFS service
7. Restart Services & Reboot the Server.


I used the below commands (parted) to setup a 5TB NFS share on CentOS 7 Kernel 3.10.0-514.el7.x86_64

Configure Network & Hostname
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=newHostName
vi /etc/hosts 
127.0.0.1 newHostName
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
Then reboot the system.

Download all required packages for NFS
yum update
yum -y install nfs-utils
yum -y install nfs-utils-lib


With fdisk (partitions up to 2TB)
Use fdisk to create a new partition on the new device (sda/sdb)
Format the new xfs file system using
# mkfs.xfs /dev/sdb1
Mount the xfs file system
# mkdir /mnt/db
# mount /dev/sdb1 /mnt/db
# mount | grep /dev/sdb1


With parted (required for shares >2TB)
Check the partitions using fdisk -l
Run the parted utility and select the disk
(parted) select /dev/sdb

Create the GPT partition table
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) unit GB

Create the partition
(parted) mkpart primary 0.0GB 4000.8GB
Quit parted now

Create the filesystem
mkfs.ext4 /dev/sdb1


Create Share point
mkdir /home/share
Change the permissions
chmod -R 777 /home/share/
Mount the partition
mount /dev/sdb1 /home/share

Automounting NFS Shares on Server Reboot:
Append text to the end of /usr/lib/systemd/system/nfs-idmap.service
[Install]
WantedBy=multi-user.target

Append text to the end of /usr/lib/systemd/system/nfs-lock.service
[Install]
WantedBy=nfs.target


Add the partition to /etc/fstab and mount it
/dev/sdb1              /home/share        ext4    defaults        0 0


[root@localhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Jul  5 17:25:55 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=f2c3a44a-621a-4dd9-b86d-cf5f2fdee4a0 /boot                   xfs     defaults        0 0
/dev/mapper/cl-home     /home                   xfs     defaults        0 0
/dev/mapper/cl-swap     swap                    swap    defaults        0 0
/dev/sdb1 /home/share ext4 defaults 0 0
[root@localhost ~]#


Append /etc/exports
vi /etc/exports
/home/share 192.168.0.0/16(rw,sync,no_subtree_check,no_root_squash)


Configure Firewall to allow NFS service
firewall-cmd --permanent --zone public --add-service mountd
firewall-cmd --permanent --zone public --add-service rpc-bind
firewall-cmd --permanent --zone public --add-service nfs
firewall-cmd --reload

Restart Services
systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap

systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap

Reboot the host
shutdown -r now

Sunday, March 26, 2017

CIFS and SMB


CIFS

Short for Common Internet File System, CIFS is a protocol that defines a standard for remote file access, potentially by millions of computers at a time. CIFS is a dialect of the Server Message Block (SMB) protocol, which was originally developed by IBM Corporation and then further enhanced by Microsoft, IBM, Intel, 3Com, and others.

With CIFS, users with different platforms and computers can share files without having to install new software. In general, CIFS is a better option than older file sharing protocols like FTP, unless links are high latency (e.g. WAN).
CIFS runs over TCP/IP but uses the SMB (Server Message Block) protocol found in Microsoft Windows for file and printer access; therefore, CIFS allows all applications, not just web browsers, to open and share files across the Internet.
With CIFS, changes made to a file are simultaneously saved on both the client and server side.

SMB

SMB (v1.0, 2.0 and 3.0) is prevalent in newer Windows OS versions, whereas CIFS is a dialect that doesn't have all the features of SMB, such as "batching" or change-notification support. As per Microsoft, the protocol exchange between a server and client first selects the dialect with the highest level of functionality supported by both. Initially CIFS or SMB may be selected based on server and client capability, but if either side fails to support a needed SMB feature such as change notification, the connection can fall back to CIFS without notice to the user.


CIFS vs SMB, which one to use?

CIFS, being a dialect and older implementation of SMB, lacks important enhancements such as multi-channel support and change notification, which are present in SMB 2.0 and SMB 3. Microsoft has been using SMB 2 from Windows Server 2008 onwards, and SMB remains the protocol of choice for transferring files between compatible nodes.

References : - 
https://docs.microsoft.com/en-us/windows/desktop/FileIO/microsoft-smb-protocol-and-cifs-protocol-overview
https://msdn.microsoft.com/en-us/library/ee442092.aspx