Overview

Cisco UCS B200 M6 Blade Server

The Cisco UCS B200 M6 blade server is a half-width blade server that is designed for the Cisco UCS 5108 Blade Server Chassis. You can install up to eight UCS B200 M6 blade servers in a UCS 5108 chassis, mixing with other models of Cisco UCS blade servers in the chassis if desired. The server supports the following features:

  • Two CPU sockets for Third Generation Intel Xeon Scalable processors, supporting one-CPU or two-CPU blade configurations.

  • Up to 32 DDR4 DIMMs (16 DIMM slots and 8 memory channels per CPU).

  • Support for Intel Optane persistent memory 200 series DIMMs.

  • One front mezzanine storage module with the following options:

    • Cisco FlexStorage module supporting two 7 mm SATA SSDs. A 12G SAS controller chip is included on the module to provide hardware RAID for the two drives.

    • Cisco FlexStorage module supporting two 7 mm NVMe SSDs.

    • Cisco FlexStorage module supporting two mini-storage modules, module "1" and module "2." Each mini-storage module is a SATA M.2 dual-SSD module that includes an on-board SATA RAID controller chip, which manages that module's two M.2 SATA SSDs.

  • Rear mLOM, which is required for blade discovery. This mLOM VIC card (for example, a Cisco VIC 1440) can provide per fabric connectivity of 20G or 40G when used with the pass-through Cisco UCS Port Expander Card in the rear mezzanine slot.

  • Optionally, the rear mezzanine slot can have a Cisco VIC Card (for example, a Cisco VIC 1480) or the pass-through Cisco UCS Port Expander Card.


Note


Component support is subject to chassis power configuration restrictions.


Figure 1. Cisco UCS B200 M6 Blade Server Front Panel

1   Cisco FlexStorage Module, showing drive bays 1 and 2
2   Disk drive status LED for each drive
3   Disk drive activity LED for each drive
4   Asset pull tag
5   Local console connector
6   Blade ejector thumbscrew
7   Blade ejector handle
8   Blade power button and LED
9   Network link status LED
10  Blade health LED
11  Locator button and LED


Note


The asset pull tag is a blank plastic tag that pulls out from the front panel. You can add your own asset tracking label to the asset pull tag without interfering with the server's intended air flow.


External Features Overview

This section describes the externally accessible features of the blade server.

LEDs

Server LEDs indicate whether the blade server is in the active or standby power state, the status of the network links, the overall health of the blade server, and whether the blue locator LED is blinking to identify the server.

The removable drives also have LEDs indicating hard disk access activity and disk health.

You might find it helpful to refer back to the blade image in Cisco UCS B200 M6 Blade Server for the locations of these LEDs on the module faceplate.

Table 1. Blade Server LEDs

Blade Power Button/LED (callout 8 in the faceplate illustration)

  • Off: Power off.

  • Green: Main power state. Power is supplied to all server components and the server is operating normally.

  • Amber: Standby power state. Power is supplied only to the service processor so that the server can still be managed.

    Note: The front-panel power button is disabled by default. It can be re-enabled through the UCS management software interface. After it is enabled, pressing and releasing the front-panel power button causes an orderly shutdown of the 12 V main power and puts the server in the standby power state. You cannot shut down standby power from the front-panel power button. For information about completely powering off the server from the software interface, see the configuration guide for UCS Manager or UCS Intersight Managed Mode.

Network Link Status (callout 9 in the faceplate illustration)

  • Off: None of the network links are up.

  • Green: At least one network link is up.

Blade Health (callout 10 in the faceplate illustration)

  • Off: Power off.

  • Green: Normal operation.

  • Amber: Minor error or degraded condition. Examples of a degraded condition:

    • Power supply redundancy lost

    • I/O module redundancy lost

    • Mismatched processors in the server (if the server can boot at all)

    • Faulty processor in a dual-processor server (if the server can boot at all)

    • Memory RAS failure (if memory is configured for RAS)

    • Failed drive in a RAID configuration

  • Blinking Amber: Critical error. Examples of a critical condition:

    • Boot failure

    • Fatal processor or bus errors detected

    • Fatal uncorrectable memory error detected

    • Both drives lost

    • Excessive thermal conditions

Locator Button/LED (callout 11 in the faceplate illustration)

  • Off: Blinking is not enabled.

  • Blinking Blue (1 Hz): Blinking to locate a selected blade; if the LED is not blinking, the blade is not selected. You can control the blinking by using the UCS management software interface or the blue locator button/LED (see the example after this table).

Disk Drive Activity (callout 3 in the faceplate illustration)

  • Off: Inactive.

  • Solid Green: Drive is present.

  • Blinking Green: Outstanding I/O activity for the disk drive.

Disk Drive Fault (callout 2 in the faceplate illustration)

  • Off: No fault detected.

  • Solid Amber: Fault detected or wrong type of drive detected.

  • Blinking Amber (4 Hz): A drive rebuild is actively in progress.

  • Blinking Amber (1 Hz): Locator function that provides a visual identifier for the drive.
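
The same power, health, and locator information that these LEDs report is also available through the UCS management software. The following is a minimal sketch using the Cisco ucsmsdk Python SDK; the UCS Manager address, credentials, and locator-LED distinguished name are placeholders, and the assumption that the locator LED is exposed as an object with a settable admin_state follows published ucsmsdk examples rather than this guide.

  # Minimal sketch: read blade power/health state and toggle the locator LED
  # through UCS Manager. "ucsm.example.com", "admin", and "password" are
  # placeholders for a real UCS Manager endpoint and credentials.
  from ucsmsdk.ucshandle import UcsHandle

  handle = UcsHandle("ucsm.example.com", "admin", "password")
  handle.login()

  try:
      # Each ComputeBlade object reports the states behind the front-panel
      # LEDs described above (power state, overall operability).
      for blade in handle.query_classid("ComputeBlade"):
          print(blade.dn, "power:", blade.oper_power, "operability:", blade.operability)

      # Assumption: the blade's locator LED is modeled as a child object with a
      # settable admin_state ("on"/"off"); the DN below is hypothetical.
      led = handle.query_dn("sys/chassis-1/blade-1/locator-led")
      if led is not None:
          led.admin_state = "on"      # start the 1 Hz blue blink
          handle.set_mo(led)
          handle.commit()
  finally:
      handle.logout()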

Buttons

The front panel has the following buttons:

  • Power button/LED: The front-panel power button is disabled by default and can be re-enabled through the UCS management software interface. After it is enabled, you can control server power either from the software interface or by pressing the button:

    • UCS management software interface: After the power button is enabled, it allows you to manually take a server temporarily out of service while leaving it in a standby state from which it can be restarted quickly. If the desired power state of the service profile associated with the blade server is set to "off," resetting the server with the power button or the UCS management software interface causes the desired power state to fall out of sync with the actual power state, and the server may unexpectedly shut down at a later time (see the sketch after this list).


      Note


      To safely reboot a server from a power-down state, use the appropriate option to boot the server in the UCS management software interface.


    • Pressing the button: Depending on the server state, pressing the Power button powers the server up or powers it down:

      • If the server is powered down, momentarily press and release the button to power up the blade.

      • If the server is powered up, momentarily press and release the button to cause an orderly power down of the blade, or press and hold the button for more than 7 seconds to shut down the server immediately.

  • Locator button: You can activate the Locator beacon LED for an individual server by pressing the locator button/LED. This button toggles the Locator LED on or off depending on its current status.
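
To avoid the desired-versus-actual power state mismatch described above, power changes can be scripted through the service profile rather than made with the physical button. The following is a minimal sketch using the Cisco ucsmsdk Python SDK; the UCS Manager address, credentials, and the service profile name org-root/ls-example-sp are hypothetical, and the LsPower object used here is the mechanism that published ucsmsdk examples use to set a service profile's desired power state.

  # Minimal sketch: change power state through the service profile so that the
  # desired and actual power states stay in sync. Endpoint, credentials, and
  # the service profile DN are placeholders.
  from ucsmsdk.ucshandle import UcsHandle
  from ucsmsdk.mometa.ls.LsPower import LsPower

  handle = UcsHandle("ucsm.example.com", "admin", "password")
  handle.login()

  try:
      sp = handle.query_dn("org-root/ls-example-sp")   # hypothetical service profile
      if sp is None:
          raise SystemExit("service profile not found")

      # Setting LsPower updates the *desired* power state ("up" or "down"),
      # so UCS Manager and the blade's actual state do not drift apart the
      # way a front-panel reset can.
      power = LsPower(parent_mo_or_dn=sp, state="down")
      handle.add_mo(power, modify_present=True)
      handle.commit()
  finally:
      handle.logout()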

Local Console Connection

The local console connector allows a direct connection to a blade server to allow operating system installation and other management tasks to be done directly rather than remotely. The port autonegotiates to a maximum of 115200 baud on the connection.

The port uses the KVM dongle cable that provides a connection into a Cisco UCS blade server; it has a DB9 serial connector, a VGA connector for a monitor, and dual USB ports for a keyboard and mouse. With this cable, you can create a direct connection to the operating system and the BIOS running on a blade server. A KVM cable ships standard with each blade chassis accessory kit.

Figure 2. KVM Cable for Blade Servers

1   Connector to blade server local console connection
2   DB9 serial connector
3   DB15 connector for a monitor
4   2-port Type A USB 2.0 connector for a mouse and keyboard
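
Once the KVM cable's DB9 serial connector is attached to a management workstation (typically through a USB-to-serial adapter), any terminal program set to 115200 baud can reach the local console; 8 data bits, no parity, and 1 stop bit are assumed here as the common serial-console default. The following is a minimal sketch using the pyserial package; the device path /dev/ttyUSB0 is an assumption that depends on the adapter and operating system.

  # Minimal sketch: open the blade's local serial console through the KVM
  # cable's DB9 connector. Assumes a USB-to-serial adapter that appears as
  # /dev/ttyUSB0 and the pyserial package (pip install pyserial).
  import serial

  with serial.Serial(
      port="/dev/ttyUSB0",          # hypothetical device path; depends on adapter/OS
      baudrate=115200,              # the port autonegotiates up to 115200 baud
      bytesize=serial.EIGHTBITS,
      parity=serial.PARITY_NONE,
      stopbits=serial.STOPBITS_ONE,
      timeout=1,
  ) as console:
      console.write(b"\r\n")                    # wake the console
      banner = console.read(256)                # read whatever the blade prints
      print(banner.decode("ascii", errors="replace"))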

Front Mezzanine Storage Module Options

In the front mezzanine slot, the server can use one of the following front storage module options:

  • Cisco FlexStorage module supporting two 7 mm SATA SSDs. A 12G SAS controller chip is included on the module to provide hardware RAID 0/1 for the two drives. The controller interfaces with the blade through PCIe 3.0.

  • Cisco FlexStorage module supporting two hot-pluggable 7 mm NVMe SSDs, which interface with the server over PCIe at link speeds of 2.5, 5, or 8 GT/s.

  • Cisco FlexStorage module supporting two mini-storage modules, which are not hot pluggable:

    • Mini-storage module "1" can be a SATA M.2 dual-SSD version that includes an on-board SATA RAID controller chip that manages the 2 M.2 dual SATA SSD drives. This mini-storage module option interfaces with the server over PCI 3.0.

    • Mini-storage module "2" can be only a SATA M.2 dual-SSD version that includes an on-board SATA RAID controller chip that manages the 2 M.2 dual SATA SSD drives. This mini-storage module option interfaces with the server over PCI 3.0.

Rear mLOM and Mezzanine Connectivity

There are multiple configurable options for rear connectivity using the mLOM and mezzanine cards.

  • mLOM card: UCSB-MLOM-40G-04 or UCSB-ML-V5Q10G

  • mLOM card + pass-through mezzanine card: UCSB-MLOM-40G-04 + UCSB-MLOM-PT-01 or UCSB-ML-V5Q10G + UCSB-MLOM-PT-01

  • mLOM card + active mezzanine card: UCSB-MLOM-40G-04 + UCSB-VIC-M84-4P