- fltFabricComputeSlotEpMisplacedInChassisSlot
- fltFabricComputeSlotEpServerIdentificationProblem
- fltVnicEtherConfig-failed
- fltProcessorUnitInoperable
- fltProcessorUnitThermalNonCritical
- fltProcessorUnitThermalThresholdCritical
- fltProcessorUnitThermalThresholdNonRecoverable
- fltProcessorUnitVoltageThresholdNonCritical
- fltProcessorUnitVoltageThresholdCritical
- fltProcessorUnitVoltageThresholdNonRecoverable
- fltStorageLocalDiskInoperable
- fltStorageItemCapacityExceeded
- fltStorageItemCapacityWarning
- fltMemoryUnitDegraded
- fltMemoryUnitInoperable
- fltMemoryUnitThermalThresholdNonCritical
- fltMemoryUnitThermalThresholdCritical
- fltMemoryUnitThermalThresholdNonRecoverable
- fltMemoryArrayVoltageThresholdNonCritical
- fltMemoryArrayVoltageThresholdCritical
- fltMemoryArrayVoltageThresholdNonRecoverable
- fltAdaptorUnitUnidentifiable-fru
- fltAdaptorUnitMissing
- fltAdaptorUnitAdaptorReachability
- fltAdaptorHostIfLink-down
- fltAdaptorExtIfLink-down
- fltPortPIoLink-down
- fltPortPIoFailed
- fltPortPIoHardware-failure
- fltPortPIoSfp-not-present
- fltFabricExternalPcDown
- fltDcxVcDown
- fltNetworkElementInoperable
- fltMgmtEntityDegraded
- fltMgmtEntityDown
- fltDcxNsFailed
- fltComputePhysicalInsufficientlyEquipped
- fltComputePhysicalIdentityUnestablishable
- fltComputeBoardPowerError
- fltComputePhysicalPowerProblem
- fltComputePhysicalThermalProblem
- fltComputePhysicalBiosPostTimeout
- fltComputePhysicalDiscoveryFailed
- fltComputePhysicalAssociationFailed
- fltComputePhysicalInoperable
- fltComputePhysicalUnassignedMissing
- fltComputePhysicalAssignedMissing
- fltComputePhysicalUnidentified
- fltComputePhysicalUnassignedInaccessible
- fltComputePhysicalAssignedInaccessible
- fltLsServerFailed
- fltLsServerDiscoveryFailed
- fltLsServerConfigFailure
- fltLsServerMaintenanceFailed
- fltLsServerRemoved
- fltLsServerInaccessible
- fltLsServerAssociationFailed
- fltLsServerUnassociated
- fltLsServerServer-unfulfilled
- fltEtherSwitchIntFIoSatellite-connection-absent
- fltEtherSwitchIntFIoSatellite-wiring-problem
- fltEquipmentPsuPowerSupplyProblem
- fltEquipmentFanDegraded
- fltEquipmentFanInoperable
- fltEquipmentPsuInoperable
- fltEquipmentIOCardRemoved
- fltEquipmentFanModuleMissing
- fltEquipmentPsuMissing
- fltEquipmentIOCardThermalProblem
- fltEquipmentFanModuleThermalThresholdNonCritical
- fltEquipmentPsuThermalThresholdNonCritical
- fltEquipmentFanModuleThermalThresholdCritical
- fltEquipmentPsuThermalThresholdCritical
- fltEquipmentFanModuleThermalThresholdNonRecoverable
- fltEquipmentPsuThermalThresholdNonRecoverable
- fltEquipmentPsuVoltageThresholdNonCritical
- fltEquipmentPsuVoltageThresholdCritical
- fltEquipmentPsuVoltageThresholdNonRecoverable
- fltEquipmentPsuPerfThresholdNonCritical
- fltEquipmentPsuPerfThresholdCritical
- fltEquipmentPsuPerfThresholdNonRecoverable
- fltEquipmentFanPerfThresholdNonCritical
- fltEquipmentFanPerfThresholdCritical
- fltEquipmentFanPerfThresholdNonRecoverable
- fltEquipmentIOCardFirmwareUpgrade
- fltEquipmentChassisUnsupportedConnectivity
- fltEquipmentChassisUnacknowledged
- fltEquipmentIOCardUnsupportedConnectivity
- fltEquipmentIOCardUnacknowledged
- fltEquipmentIOCardPeerDisconnected
- fltEquipmentChassisIdentity
- fltEquipmentIOCardIdentity
- fltEquipmentFanModuleIdentity
- fltEquipmentPsuIdentity
- fltEquipmentChassisPowerProblem
- fltEquipmentChassisThermalThresholdCritical
- fltEquipmentChassisThermalThresholdNonRecoverable
- fltComputeBoardCmosVoltageThresholdCritical
- fltComputeBoardCmosVoltageThresholdNonRecoverable
- fltMgmtEntityElection-failure
- fltMgmtEntityHa-not-ready
- fltMgmtEntityVersion-incompatible
- fltEquipmentFanMissing
- fltEquipmentIOCardAutoUpgradingFirmware
- fltFirmwarePackItemImageMissing
- fltEtherSwitchIntFIoSatellite-wiring-numbers-unexpected
- fltMgmtEntityManagement-services-failure
- fltMgmtEntityManagement-services-unresponsive
- fltEquipmentChassisInoperable
- fltEtherServerIntFIoHardware-failure
- fltDcxVcMgmt-vif-down
- fltSysdebugMEpLogMEpLogLog
- fltSysdebugMEpLogMEpLogVeryLow
- fltSysdebugMEpLogMEpLogFull
- fltComputePoolEmpty
- fltUuidpoolPoolEmpty
- fltIppoolPoolEmpty
- fltMacpoolPoolEmpty
- fltFirmwareUpdatableImageUnusable
- fltFirmwareBootUnitCantBoot
- fltFcpoolInitiatorsEmpty
- fltEquipmentIOCardInaccessible
- fltDcxVIfLinkState
- fltEquipmentFanModuleDegraded
- fltEquipmentIOCardPost-failure
- fltEquipmentFanPerfThresholdLowerNonRecoverable
- fltComputePhysicalPost-failure
- fltEquipmentPsuOffline
- fltStorageRaidBatteryInoperable
- fltSysdebugMEpLogTransferError
- fltComputeRtcBatteryInoperable
- fltMemoryBufferUnitThermalThresholdNonCritical
- fltMemoryBufferUnitThermalThresholdCritical
- fltMemoryBufferUnitThermalThresholdNonRecoverable
- fltComputeIOHubThermalNonCritical
- fltComputeIOHubThermalThresholdCritical
- fltComputeIOHubThermalThresholdNonRecoverable
- fltEquipmentChassisIdentity-unestablishable
- fltSwVlanPortNsResourceStatus
- fltFabricLanPinGroupEmpty
- fltAdaptorExtEthIfMisConnect
- fltAdaptorHostEthIfMisConnect
- fltPowerBudgetPowerBudgetCmcProblem
- fltPowerBudgetPowerBudgetBmcProblem
- fltPowerBudgetPowerBudgetDiscFail
- fltPowerGroupPowerGroupInsufficientBudget
- fltPowerGroupPowerGroupBudgetIncorrect
- fltMgmtIfMisConnect
- fltLsComputeBindingAssignmentRequirementsNotMet
- fltEquipmentFexPost-failure
- fltEquipmentFexIdentity
- fltAdaptorHostEthIfMissing
- fltPortPIoInvalid-sfp
- fltMgmtIfMissing
- fltFabricEthLanPcEpDown
- fltEquipmentIOCardThermalThresholdNonCritical
- fltEquipmentIOCardThermalThresholdCritical
- fltEquipmentIOCardThermalThresholdNonRecoverable
- fltEquipmentChassisSeeprom-inoperable
- fltExtmgmtIfMgmtifdown
- fltPowerChassisMemberPowerGroupCapInsufficient
- fltPowerChassisMemberChassisFirmwareProblem
- fltPowerChassisMemberChassisPsuInsufficient
- fltPowerChassisMemberChassisPsuRedundanceFailure
- fltPowerBudgetPowerCapReachedCommit
- fltSysdebugAutoCoreFileExportTargetAutoCoreTransferFailure
- fltFabricMonSpanConfigFail
- fltPowerBudgetChassisPsuInsufficient
- fltPowerBudgetTStateTransition
- fltPowerPolicyPowerPolicyApplicationFail
- fltMgmtIfNew
- fltAdaptorExtEthIfMissing
- fltStorageLocalDiskSlotEpUnusable
- fltFabricEthEstcPcEpDown
- fltEquipmentFexIdentity-unestablishable
- fltEquipmentFanModuleInoperable
- fltLsmaintMaintPolicyUnresolvableScheduler
- fltProcessorUnitIdentity-unestablishable
- fltIqnpoolPoolEmpty
- fltFabricDceSwSrvPcEpDown
- fltFabricEpMgrEpTransModeFail
- fltFabricPIoEpErrorMisconfigured
- fltFabricEthLanEpMissingPrimaryVlan
- fltFabricEthLanPcMissingPrimaryVlan
- fltVnicEtherPinningMismatch
- fltVnicEtherPinningMisconfig
- fltProcessorUnitDisabled
- fltMemoryUnitDisabled
- fltFirmwareBootUnitActivateStatusFailed
- fltFabricInternalPcDown
- fltMgmtEntityDevice-1-shared-storage-error
- fltMgmtEntityDevice-2-shared-storage-error
- fltMgmtEntityDevice-3-shared-storage-error
- fltMgmtEntityHa-ssh-keys-mismatched
- fltComputeBoardPowerFail
- fltVmVifLinkState
- fltEquipmentPsuPowerSupplyShutdown
- fltEquipmentPsuPowerThreshold
- fltEquipmentPsuInputError
- fltNetworkElementInventoryFailed
- fltAdaptorUnitExtnUnidentifiable-fru
- fltAdaptorUnitExtnMissing
- fltEquipmentFexFex-unsupported
- fltVnicIScsiConfig-failed
- fltPkiKeyRingStatus
- fltPkiTPStatus
- fltComputePhysicalDisassociationFailed
- fltComputePhysicalNetworkMisconfigured
- fltVnicProfileProfileConfigIncorrect
- fltVnicEtherIfVlanAccessFault
- fltVnicEtherIfVlanUnresolvable
- fltVnicEtherIfInvalidVlan
- fltFabricVlanVlanConflictPermit
- fltFabricVlanReqVlanPermitUnresolved
- fltFabricVlanGroupReqVlanGroupPermitUnresolved
- fltExtpolClientClientLostConnectivity
- fltStorageLocalDiskDegraded
- fltStorageRaidBatteryDegraded
- fltStorageRaidBatteryRelearnAborted
- fltStorageRaidBatteryRelearnFailed
- fltStorageInitiatorConfiguration-error
- fltStorageControllerPatrolReadFailed
- fltStorageControllerInoperable
- fltStorageLocalDiskRebuildFailed
- fltStorageLocalDiskCopybackFailed
- fltStorageVirtualDriveInoperable
- fltStorageVirtualDriveDegraded
- fltStorageVirtualDriveReconstructionFailed
- fltStorageVirtualDriveConsistencyCheckFailed
- fltAaaProviderGroupProvidergroup
- fltAaaConfigServergroup
- fltAaaRoleRoleNotDeployed
- fltAaaLocaleLocaleNotDeployed
- fltAaaUserRoleUserRoleNotDeployed
- fltAaaUserLocaleUserLocaleNotDeployed
- fltPkiKeyRingKeyRingNotDeployed
- fltCommSnmpSyscontactEmpty
- fltCommDateTimeCommTimeZoneInvalid
- fltAaaUserLocalUserNotDeployed
- fltCommSnmpUserSnmpUserNotDeployed
- fltCommSvcEpCommSvcNotDeployed
- fltSwVlanPortNsVLANCompNotSupport
- fltPolicyControlEpSuspendModeActive
- fltNetworkElementThermalThresholdCritical
- fltFabricPinTargetDown
- fltFabricEthLanEpOverlapping-vlan
- fltFabricEthLanPcOverlapping-vlan
- fltFabricVlanMisconfigured-mcast-policy
- fltMgmtConnectionDisabled
- fltMgmtConnectionUnused
- fltMgmtConnectionUnsupportedConnectivity
- fltCallhomeEpNoSnmpPolicyForCallhome
- fltCapabilityCatalogueLoadErrors
- fltExtmgmtArpTargetsArpTargetsNotValid
- fltExtpolClientGracePeriodWarning
- fltExtpolClientGracePeriodWarning2
- fltExtpolClientGracePeriodWarning3
- fltExtpolClientGracePeriodWarning4
- fltExtpolClientGracePeriodWarning5
- fltExtpolClientGracePeriodWarning6
- fltExtpolClientGracePeriodWarning7
- fltExtpolClientGracePeriodWarning1
- fltStorageItemFilesystemIssues
- fltPkiKeyRingModulus
- fltAaaOrgLocaleOrgNotPresent
- fltNetworkOperLevelExtraprimaryvlans
- fltEquipmentHealthLedCriticalError
- fltEquipmentHealthLedMinorError
- fltVnicEtherIfRemoteVlanUnresolvable
- fltVnicEtherVirtualization-conflict
- fltLsIssuesIscsi-config-failed
- fltStorageLocalDiskMissing
- fltStorageFlexFlashControllerInoperable
- fltStorageFlexFlashCardInoperable
- fltStorageFlexFlashCardMissing
- fltStorageFlexFlashVirtualDriveDegraded
- fltStorageFlexFlashVirtualDriveInoperable
- fltStorageFlexFlashControllerUnhealthy
- fltAaaProviderGroupProvidergroupsize
- fltFirmwareAutoSyncPolicyDefaultHostPackageMissing
- fltFabricNetflowMonSessionFlowMonConfigFail
- fltFabricNetflowMonSessionNetflowSessionConfigFail
- fltFabricPooledVlanNamedVlanUnresolved
- fltExtvmmVMNDRefVmNetworkReferenceIncorrect
- fltExtmgmtNdiscTargetsNdiscTargetsNotValid
- fltFirmwareBootUnitPowerCycleRequired
- fltMgmtControllerUnsupportedDimmBlacklisting
- fltFabricEthLanEpUdldLinkDown
- fltFabricEthLanPcEpUdldLinkDown
- fltEquipmentChassisInvalid-fru
- fltEquipmentSwitchIOCardRemoved
- fltEquipmentSwitchIOCardThermalProblem
- fltEquipmentSwitchIOCardThermalThresholdNonCritical
- fltEquipmentSwitchIOCardThermalThresholdCritical
- fltEquipmentSwitchIOCardThermalThresholdNonRecoverable
- fltEquipmentSwitchIOCardIdentity
- fltEquipmentSwitchIOCardCpuThermalThresholdCritical
- fltPowerBudgetChassisPsuMixedMode
- fltNetworkElementRemoved
- fltNetworkOperLevelExtrasecondaryvlans
- fltSwVlanExtrasecondaryvlansperprimary
- fltMgmtBackupPolicyConfigConfiguration-backup-outdated
- fltFirmwareStatusCimcFirmwareMismatch
- fltFirmwareStatusPldFirmwareMismatch
- fltVnicEtherVirtualization-netflow-conflict
- fltSysdebugLogExportStatusLogExportFailure
- fltLsServerSvnicNotPresent
- fltLsIssuesKvmPolicyUnsupported
- fltComputeABoardThermalProblem
- fltComputeABoardPowerUsageProblem
- fltComputeABoardMotherBoardVoltageThresholdUpperNonRecoverable
- fltComputeABoardMotherBoardVoltageThresholdLowerNonRecoverable
- fltComputeABoardMotherBoardVoltageUpperThresholdCritical
- fltComputeABoardMotherBoardVoltageLowerThresholdCritical
- fltCimcvmediaActualMountEntryVmediaMountFailed
- fltFabricVlanPrimaryVlanMissingForIsolated
- fltFabricVlanPrimaryVlanMissingForCommunity
- fltFabricVlanMismatch-a
- fltFabricVlanMismatch-b
- fltFabricVlanErrorAssocPrimary
- fltStorageMezzFlashLifeConfiguration-error
- fltStorageMezzFlashLifeDegraded
- fltStorageFlexFlashControllerMismatch
- fltStorageFlexFlashDriveUnhealthy
- fltStorageFlexFlashCardUnhealthy
- fltMgmtInterfaceNamedInbandVlanUnresolved
- fltMgmtInterfaceInbandUnsupportedServer
- fltMgmtInterfaceInbandUnsupportedFirmware
- fltComputePhysicalAdapterMismatch
- fltEquipmentSwitchCardAct2LiteFail
- fltEquipmentTpmSlaveTpm
- fltPoolElementDuplicatedAssigned
- fltSwVlanPortNsResourceStatusWarning
- fltNetworkElementMemoryerror
- fltMgmtPmonEntryFPRM-process-failure
- fltSmSlotSmaHeartbeat
- fltSmSlotBladeNotWorking
- fltSmSlotDiskFormatFailed
- fltSmSlotBladeSwap
- fltOsControllerFailedBladeBootup
- fltOsControllerFailedBootupRecovery
- fltFirmwarePlatformPackBundleVersionMissing
- fltSmSecSvcSwitchConfigFail
- fltSmLogicalDeviceIncompleteConfig
- fltSmLogicalDeviceLogicalDeviceError
- fltEtherFtwPortPairBypass
- fltCommDateTimeCommNtpConfigurationFailed
- fltSmConfigIssueLogicalDeviceConfigError
- fltSmAppAppImageCorrupted
- fltEquipmentXcvrNonSupportedXcvr
- fltFabricSspEthMonDelAllSessEnabled
- fltIpsecConnectionIpsecConnInvalidKey
- fltIpsecConnectionIpsecConnInvalidCert
- fltIpsecAuthorityIpsecAuthorInvalidTp
- fltSmHotfixHotfixInstallFailed
- fltSmHotfixHotfixError
- fltSmErrorError
- fltSmCloudConnectorCloudRegistrationFailed
- fltSmCloudConnectorCloudUnregistrationFailed
- fltSmUnsignedCspLicenseUnsignedCSPLicenseInstalled
- fltSdLinkVnicConfigFail
- fltNwctrlCardConfigOffline
- fltNwctrlCardConfigFailed
- fltNwctrlCardConfigError
- fltNwctrlCardConfigOirFailed
- fltNwctrlCardConfigOirInvalid
- fltNwctrlCardConfigRemoval
- fltNwctrlCardConfigMismatch
- fltNwctrlCardConfigSupriseRemoval
- fltFirmwareRunnableAdapterUpgradeRequired
- fltSmClusterBootstrapCclSubnetNotSupported
- fltSmAppInstanceFailedConversion
- fltSmAppInstance2AppNotResponding
- fltSmAppInstance2AppInstallFailed
- fltSmAppInstance2AppStartFailed
- fltSmAppInstance2AppUpdateFailed
- fltSmAppInstance2AppStopFailed
- fltSmAppInstance2AppNotInstalled
- fltSmAppInstance2AppInstanceError
- fltSmAppInstance2AppInstanceUnsupported
- fltSmAppInstance2SoftwareIncompatible
- fltNetworkElementSamconfig
- fltSmAppInstance2AppFaultState
- fltSmExternalPortLinkConflictConfig
- fltSmSlotAdapter2NotResponding
- fltSmHwCryptoHwCryptoNotOperable
- fltPkiKeyRingEc
- fltCommTelemetryTelemetryRegistrationFailed
- fltCommTelemetryTelemetryUnregistrationFailed
- fltCommTelemetryTelemetryGetDataFailed
- fltCommTelemetryTelemetrySendDataFailed
- fltAaaUserEpPasswordEncryptionKeyNotSet
- fltSdInternalMgmtBootstrapInternalMgmtVnicConfigFail
- fltSdExternalLduLinkExternalLduLinkVnicConfigFail
- fltSdAppLduLinkAppLduLinkEndpoint1VnicConfigFail
- fltSdAppLduLinkAppLduLinkEndpoint2VnicConfigFail
- fltSdPreAllocatedVnicVnicPreAllocationFail
- fltFirmwareVersionIssueImageVersionMismatch
- fltFabricComputeSlotEpBladeDecommissionFail
- fltEtherFtwPortPairPhyBypass
- fltEtherFtwPortPairPhyBypassErr
- fltMgmtImporterConfiguration-import-failed
FXOS Faults
This chapter provides information about the faults that may be raised in FXOS.
fltFabricComputeSlotEpMisplacedInChassisSlot
Server, vendor([vendor]), model([model]), serial([serial]) in slot [chassisId]/[slotId] presence: [presence]
This fault typically occurs when Cisco FPR Manager detects a server in a chassis slot that does not match what was previously equipped in the slot.
If you see this fault, take the following actions:
Step 1 If the previous server was intentionally removed and a new one was inserted, reacknowledge the server.
Step 2 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltFabricComputeSlotEpServerIdentificationProblem
Problem identifying server in slot [chassisId]/[slotId]
This fault typically occurs when Cisco FPR Manager encounters a problem identifying the server in a chassis slot.
If you see this fault, take the following actions:
Step 1 Remove and reinsert the server.
Step 2 Reacknowledge the server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltVnicEtherConfig-failed
Eth vNIC [name], service profile [name] failed to apply configuration
This fault typically occurs when Cisco FPR Manager could not place the vNIC on the vCon.
If you see this fault, take the following actions:
Step 1 Verify that the server was successfully discovered.
Step 2 Verify that the correct type of adapters are installed on the server.
Step 3 Confirm that the vCon assignment is correct.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltProcessorUnitInoperable
Processor [id] on server [chassisId]/[slotId] operability: [operability]
This fault occurs in the unlikely event that a processor is inoperable.
If you see this fault, take the following actions:
Step 1 If the fault occurs on a blade server processor, remove the server from the chassis and then reinsert it.
Step 2 In Cisco FPR Manager, decommission and then recommission the server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltProcessorUnitThermalNonCritical
Processor [id] on server [chassisId]/[slotId] temperature: [thermal]
Processor [id] on server [id] temperature: [thermal]
This fault occurs when the processor temperature on a blade or rack server exceeds a non-critical threshold value, but is still below the critical threshold. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
- If sensors on a CPU reach 179.6°F (82°C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the server.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the servers have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis or rack server are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 8 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltProcessorUnitThermalThresholdCritical
Processor [id] on server [chassisId]/[slotId] temperature: [thermal]
Processor [id] on server [id] temperature: [thermal]
This fault occurs when the processor temperature on a blade or rack server exceeds a critical threshold value. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
- If sensors on a CPU reach 179.6°F (82°C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the server.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the servers have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis or rack server are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 8 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltProcessorUnitThermalThresholdNonRecoverable
Processor [id] on server [chassisId]/[slotId] temperature: [thermal]
Processor [id] on server [id] temperature: [thermal]
This fault occurs when the processor temperature on a blade or rack server has been out of the operating range, and the issue is not recoverable. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
- If sensors on a CPU reach 179.6°F (82°C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the server.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the servers have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis or rack server are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 8 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltProcessorUnitVoltageThresholdNonCritical
Processor [id] on server [chassisId]/[slotId] voltage: [voltage]
Processor [id] on server [id] voltage: [voltage]
This fault occurs when the processor voltage is out of the normal operating range, but has not yet reached a critical stage. Normally, the processor recovers from this situation on its own.
If you see this fault, take the following actions:
Step 1 Monitor the processor for further degradation.
Step 2 If the fault occurs on a blade server processor, remove the server from the chassis and then reinsert it.
Step 3 In Cisco FPR Manager, decommission and then recommission the server.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltProcessorUnitVoltageThresholdCritical
Processor [id] on server [chassisId]/[slotId] voltage: [voltage]
Processor [id] on server [id] voltage: [voltage]
This fault occurs when the processor voltage has exceeded the specified hardware voltage rating.
If you see this fault, take the following actions:
Step 1 If the fault occurs on a blade server processor, remove the server from the chassis and then reinsert it.
Step 2 In Cisco FPR Manager, decommission and then recommission the server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltProcessorUnitVoltageThresholdNonRecoverable
Processor [id] on server [chassisId]/[slotId] voltage: [voltage]
Processor [id] on server [id] voltage: [voltage]
This fault occurs when the processor voltage has exceeded the specified hardware voltage rating and may damage or jeopardize the processor hardware.
If you see this fault, take the following actions:
Step 1 If the fault occurs on a blade server processor, remove the server from the chassis and then reinsert it.
Step 2 In Cisco FPR Manager, decommission and then recommission the server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltStorageLocalDiskInoperable
Local disk [id] on server [chassisId]/[slotId] operability: [operability]. Reason: [operQualifierReason]
Local disk [id] on server [id] operability: [operability]. Reason: [operQualifierReason]
This fault occurs when the local disk has become inoperable.
If you see this fault, take the following actions:
Step 1 Insert the disk in a supported slot.
Step 2 Remove and reinsert the local disk.
Step 3 Replace the disk, if an additional disk is available.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltStorageItemCapacityExceeded
Disk usage for partition [name] on fabric interconnect [id] exceeded 70%
This fault occurs when the partition disk usage exceeds 70% but is less than 90%.
If you see this fault, take the following actions:
Step 1 Reduce the partition disk usage to less than 70% by deleting unused and unnecessary files.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltStorageItemCapacityWarning
Disk usage for partition [name] on fabric interconnect [id] exceeded 90%
This fault occurs when the partition disk usage exceeds 90%.
If you see this fault, take the following actions:
Step 1 Reduce the partition disk usage to less than 90% by deleting unused and unnecessary files.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMemoryUnitDegraded
DIMM [location] on server [chassisId]/[slotId] operability: [operability]
DIMM [location] on server [id] operability: [operability]
This fault occurs when a DIMM is in a degraded operability state. This state typically occurs when an excessive number of correctable ECC errors are reported on the DIMM by the server BIOS.
If you see this fault, take the following actions:
Step 1 Monitor the error statistics on the degraded DIMM through Cisco FPR Manager. If the high number of errors persists, there is a high possibility of the DIMM becoming inoperable.
Step 2 If the DIMM becomes inoperable, replace the DIMM.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMemoryUnitInoperable
DIMM [location] on server [chassisId]/[slotId] operability: [operability]
DIMM [location] on server [id] operability: [operability]
This fault typically occurs because the number of correctable or uncorrectable errors on a DIMM has exceeded a threshold. The DIMM may be inoperable.
If you see this fault, take the following actions:
Step 1 If the SEL is enabled, review the SEL statistics on the DIMM to determine which threshold was crossed.
Step 2 If necessary, replace the DIMM.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMemoryUnitThermalThresholdNonCritical
DIMM [location] on server [chassisId]/[slotId] temperature: [thermal]
DIMM [location] on server [id] temperature: [thermal]
This fault occurs when the temperature of a memory unit on a blade or rack server exceeds a non-critical threshold value, but is still below the critical threshold. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
- If sensors on a CPU reach 179.6°F (82°C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the server.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the servers have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis or rack server are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 8 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMemoryUnitThermalThresholdCritical
DIMM [location] on server [chassisId]/[slotId] temperature: [thermal]
DIMM [location] on server [id] temperature: [thermal]
This fault occurs when the temperature of a memory unit on a blade or rack server exceeds a critical threshold value. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
- If sensors on a CPU reach 179.6°F (82°C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the server.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the servers have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis or rack server are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 8 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMemoryUnitThermalThresholdNonRecoverable
DIMM [location] on server [chassisId]/[slotId] temperature: [thermal]
DIMM [location] on server [id] temperature: [thermal]
This fault occurs when the temperature of a memory unit on a blade or rack server has been out of the operating range, and the issue is not recoverable. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
- If sensors on a CPU reach 179.6°F (82°C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the server.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the servers have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis or rack server are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 8 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMemoryArrayVoltageThresholdNonCritical
Memory array [id] on server [chassisId]/[slotId] voltage: [voltage]
Memory array [id] on server [id] voltage: [voltage]
This fault occurs when the memory array voltage is out of the normal operating range, but has not yet reached a critical stage. Typically, the memory array recovers from this situation on its own.
If you see this fault, take the following actions:
Step 1 If the SEL is enabled, look at the SEL statistics on the DIMM to determine which threshold was crossed.
Step 2 Monitor the memory array for further degradation.
Step 3 If the fault occurs on a blade server memory array, remove the blade and re-insert it into the chassis.
Step 4 In Cisco FPR Manager, decommission and recommission the server.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMemoryArrayVoltageThresholdCritical
Memory array [id] on server [chassisId]/[slotId] voltage: [voltage]
Memory array [id] on server [id] voltage: [voltage]
This fault occurs when the memory array voltage exceeds the specified hardware voltage rating.
If you see this fault, take the following actions:
Step 1 If the SEL is enabled, look at the SEL statistics on the DIMM to determine which threshold was crossed.
Step 2 Monitor the memory array for further degradation.
Step 3 If the fault occurs on a blade server memory array, remove the blade and re-insert it into the chassis.
Step 4 In Cisco FPR Manager, decommission and recommission the server.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMemoryArrayVoltageThresholdNonRecoverable
Memory array [id] on server [chassisId]/[slotId] voltage: [voltage]
Memory array [id] on server [id] voltage: [voltage]
This fault occurs when the memory array voltage has exceeded the specified hardware voltage rating and may damage or jeopardize the memory hardware.
If you see this fault, take the following actions:
Step 1 If the SEL is enabled, review the SEL statistics on the DIMM to determine which threshold was crossed.
Step 2 Monitor the memory array for further degradation.
Step 3 If the fault occurs on a blade server memory array, remove the server from the chassis and re-insert it.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltAdaptorUnitUnidentifiable-fru
Adapter [id] in server [id] has unidentified FRU
Adapter [id] in server [chassisId]/[slotId] has unidentified FRU
This fault typically occurs because Cisco FPR Manager has detected an unsupported adapter. For example, the model, vendor, or revision is not recognized.
If you see this fault, take the following actions:
Step 1 Verify that a supported adapter is installed.
Step 2 Verify that the capability catalog in Cisco FPR Manager is up to date. If necessary, update the catalog.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltAdaptorUnitMissing
Adapter [id] in server [id] presence: [presence]
Adapter [id] in server [chassisId]/[slotId] presence: [presence]
The adaptor is missing. Cisco FPR Manager raises this fault when any of the following scenarios occur:
- The endpoint reports there is no adapter in the adaptor slot.
- The endpoint cannot detect or communicate with the adapter in the adaptor slot.
If you see this fault, take the following actions:
Step 1 Make sure an adapter is inserted in the adaptor slot in the server.
Step 2 Check whether the adaptor is connected and configured properly and is running the recommended firmware version.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltAdaptorUnitAdaptorReachability
Adapter [id]/[id] is unreachable
Adapter [chassisId]/[slotId]/[id] is unreachable
Cisco FPR Manager cannot access the adapter. This fault typically occurs as a result of one of the following issues:
- The server does not have sufficient power.
- The I/O module is not functional.
- The adapter firmware has failed.
- The adapter is not functional.
If you see this fault, take the following actions:
Step 1 Check the POST results for the server. In Cisco FPR Manager GUI, you can access the POST results from the General tab for the server. In Cisco FPR Manager CLI, you can access the POST results through the show post command under the scope for the server.
Step 2 In Cisco FPR Manager, check the power state of the server.
Step 3 Verify that the physical server has the same power state.
Step 4 If the server is off, turn the server on.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltAdaptorHostIfLink-down
Adapter [transport] host interface [id]/[id]/[id] link state: [linkState], Associated external interface link state: [vcAdminState]
Adapter [transport] host interface [chassisId]/[slotId]/[id]/[id] link state: [linkState], Associated external interface link state: [vcAdminState]
This fault typically occurs as a result of one of the following issues:
- The fabric interconnect is in End-Host mode, and all uplink ports failed.
- The server port to which the adapter is pinned failed.
- A transient error caused the link to fail.
If you see this fault, take the following actions:
Step 1 If an associated port is disabled, enable the port.
Step 2 Check the associated port to ensure it is in up state.
Step 3 Reacknowledge the server with the adapter that has the failed link.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
Note: The vNIC link state will be down when its associated external interface is down. This fault should not be raised in that case, regardless of whether the external interface is administratively disabled or operationally down (that is, the physical link is detected as down).
fltAdaptorExtIfLink-down
Adapter uplink interface [chassisId]/[slotId]/[id]/[id] on security module [slotId] link state: [linkState]. Please check switch blade-facing port status. Resetting security module might be required.
The link for a network facing adapter interface is down. Cisco FPR Manager raises this fault when any of the following scenarios occur:
- Cisco FPR Manager cannot establish and/or validate the adapter’s connectivity to any of the fabric interconnects.
- The endpoint reports a link down or vNIC down event on the adapter link.
- The endpoint reports an errored link state or errored vNIC state event on the adapter link.
If you see this fault, take the following actions:
Step 1 Verify that the adapter is connected, configured properly, and is running the recommended firmware version.
Step 2 If the server is stuck at discovery, decommission the server and reacknowledge the server slot.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltPortPIoLink-down
[transport] port [portId] on chassis [id] oper state: [operState], reason: [stateQual]
[transport] port [slotId]/[aggrPortId]/[portId] on fabric interconnect [id] oper state: [operState], reason: [stateQual]
[transport] port [slotId]/[portId] on fabric interconnect [id] oper state: [operState], reason: [stateQual]
This fault occurs when a fabric interconnect port is in link-down state. This state impacts the traffic destined for the port.
If you see this fault, take the following actions:
Step 1 Verify that the physical link is properly connected between the fabric interconnect and the peer component.
Step 2 Verify that the configuration on the peer entity is properly configured and matches the fabric interconnect port configuration.
Step 3 Unconfigure and re-configure the port.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltPortPIoFailed
[transport] port [portId] on chassis [id] oper state: [operState], reason: [stateQual]
[transport] port [slotId]/[aggrPortId]/[portId] on fabric interconnect [id] oper state: [operState], reason: [stateQual]
[transport] port [slotId]/[portId] on fabric interconnect [id] oper state: [operState], reason: [stateQual]
This fault is raised on fabric interconnect ports, and on server-facing ports on an IOM or FEX module, when FPRM detects that a port expected to be up is instead in a failed state, even though the user has enabled it, there is no known hardware failure or missing SFP, and the port license is valid. The specific reason is displayed in the fault description string.
If you see this fault, take corrective action based on the reason displayed in the fault description. If the reason is "ENM source pinning failed", the fabric interconnect is operating in End-Host mode and the uplink port to which this server-facing port is pinned is down or does not have the appropriate VLAN configured. If this error occurs on an appliance port, check the VLAN configuration on the uplink port; a VLAN with the same ID as the one on the appliance port must also be configured on the uplink port. If the fault persists after correcting the configuration, create a show tech-support file for Cisco FPR Manager and the chassis or FEX module, and then contact Cisco TAC.
fltPortPIoHardware-failure
[transport] port [portId] on chassis [id] oper state: [operState], reason: hardware-failure
[transport] port [slotId]/[aggrPortId]/[portId] on fabric interconnect [id] oper state: [operState], reason: hardware-failure
[transport] port [slotId]/[portId] on fabric interconnect [id] oper state: [operState], reason: hardware-failure
This fault is raised on fabric interconnect ports and server-facing ports on an IOM or a FEX module when the system detects a hardware failure.
If you see this fault, create a show tech-support file for Cisco FPR Manager and the chassis or FEX module, and then contact Cisco TAC.
fltPortPIoSfp-not-present
[transport] port [portId] on chassis [id] oper state: [operState]
[transport] port [slotId]/[aggrPortId]/[portId] on fabric interconnect [id] oper state: [operState]
[transport] port [slotId]/[portId] on fabric interconnect [id] oper state: [operState]
When a fabric interconnect port is not in an unconfigured state, an SFP is required for its operation. This fault is raised to indicate that the SFP is missing from a configured port.
If you see this fault, insert a supported SFP into the port on the fabric interconnect. A list of supported SFPs can be found on www.Cisco.com.
fltFabricExternalPcDown
[type] port-channel [portId] on fabric interconnect [switchId] oper state: [operState], reason: [stateQual]
This fault typically occurs when a fabric interconnect reports that a fabric port channel is operationally down.
If you see this fault, take the following actions:
Step 1 Verify that the member ports in the fabric port channel are administratively up and operational. Check the link connectivity for each port.
Step 2 If connectivity seems correct, check the operational states on the peer switch ports of the port channel members.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltDcxVcDown
[transport] VIF [id] on server [chassisId]/[slotId] of switch [switchId] down, reason: [stateQual]
[transport] VIF [id] on server [id] of switch [switchId] down, reason: [stateQual]
This fault typically occurs when a fabric interconnect reports a down, errored, or unavailable connectivity state for a virtual interface.
If you see this fault, take the following actions:
Step 1 Verify that the uplink physical interface is up.
Step 2 Check the associated port to ensure it is in up state.
Step 3 If the vNIC/vHBA is configured for a pin group, verify that the pin group targets are configured correctly.
Step 4 In the Network Control Policy for the vNIC, verify that the ’Action on Uplink Fail’ field is set to ’warning’.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltNetworkElementInoperable
Fabric Interconnect [id] operability: [operability]
This fault typically occurs when the fabric interconnect cluster controller reports that the membership state of the fabric interconnect is down, indicating that the fabric interconnect is inoperable.
If you see this fault, take the following actions:
Step 1 Verify that both fabric interconnects in the cluster are running the same Kernel and System software versions.
Step 2 Verify that the fabric interconnect software version and the Cisco FPR Manager software version are the same.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMgmtEntityDegraded
Fabric Interconnect [id], HA Cluster interconnect link failure
This fault occurs when one of the cluster links (either L1 or L2) of a fabric interconnect is not operationally up. This issue impacts the full HA functionality of the fabric interconnect cluster.
If you see this fault, take the following actions:
Step 1 Verify that both L1 and L2 links are properly connected between the fabric interconnects.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMgmtEntityDown
Fabric Interconnect [id], HA Cluster interconnect total link failure
This fault occurs when both cluster links (L1 and L2) of the fabric interconnects are in a link-down state. This issue impacts the full HA functionality of the fabric interconnect cluster.
If you see this fault, take the following actions:
Step 1 Verify that both L1 and L2 links are properly connected between the fabric interconnects.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltDcxNsFailed
Server [chassisId]/[slotId] (service profile: [assignedToDn]) virtual network interface allocation failed.
Server [id] (service profile: [assignedToDn]) virtual network interface allocation failed.
The adapter’s vif-namespace activation failed due to insufficient resources. Cisco FPR Manager raises this fault when the number of deployed VIF resources exceeds the maximum VIF resources available on the adapter connected to the fabric interconnect.
If you see this fault, take the following actions:
Step 1 Check the NS "size" and "used" resources to determine by how many vNICs the adapter exceeded the maximum.
Step 2 Unconfigure or delete all vNICs on the adapter above the maximum number.
Step 3 Add additional fabric uplinks from the IOM to the corresponding fabric interconnect and reacknowledge the chassis. This increases the "NS size" on the adapter.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePhysicalInsufficientlyEquipped
Server [id] (service profile: [assignedToDn]) has insufficient number of DIMMs, CPUs and/or adapters
Server [chassisId]/[slotId] (service profile: [assignedToDn]) has insufficient number of DIMMs, CPUs and/or adapters
This fault typically occurs because Cisco FPR Manager has detected that the server has an insufficient number of DIMMs, CPUs, and/or adapters.
If you see this fault, take the following actions:
Step 1 Verify that the DIMMs are installed in a supported configuration.
Step 2 Verify that an adapter and CPU are installed.
Step 3 Reacknowledge the server.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePhysicalIdentityUnestablishable
Server [id] (service profile: [assignedToDn]) has an invalid FRU
Server [chassisId]/[slotId] (service profile: [assignedToDn]) has an invalid FRU
This fault typically occurs because Cisco FPR Manager has detected an unsupported server or CPU.
If you see this fault, take the following actions:
Step 1 Verify that a supported server and/or CPU is installed.
Step 2 Verify that the Cisco FPR Manager capability catalog is up to date.
Step 3 Reacknowledge the server.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputeBoardPowerError
Motherboard of server [chassisId]/[slotId] (service profile: [assignedToDn]) power: [operPower]
Motherboard of server [id] (service profile: [assignedToDn]) power: [operPower]
This fault typically occurs when the server power sensors have detected a problem.
If you see this fault, take the following actions:
Step 1 Make sure that the server is correctly installed in the chassis and that all cables are secure.
Step 2 If you reinstalled the server, reacknowledge it.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePhysicalPowerProblem
Server [id] (service profile: [assignedToDn]) oper state: [operState]
Server [chassisId]/[slotId] (service profile: [assignedToDn]) oper state: [operState]
This fault typically occurs when the server power sensors have detected a problem.
If you see this fault, take the following actions:
Step 1 Make sure that the server is correctly installed in the chassis and that all cables are secure.
Step 2 If you reinstalled the server, reacknowledge it.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePhysicalThermalProblem
Server [id] (service profile: [assignedToDn]) oper state: [operState]
Server [chassisId]/[slotId] (service profile: [assignedToDn]) oper state: [operState]
This fault typically occurs when the server thermal sensors have detected a problem.
If you see this fault, take the following actions:
Step 1 Make sure that the server fans are working properly.
Step 2 Wait for 24 hours to see if the problem resolves itself.
Step 3 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePhysicalBiosPostTimeout
Server [id] (service profile: [assignedToDn]) BIOS failed power-on self test
Blade [chassisId]/[slotId] (service profile: [assignedToDn]) BIOS failed power-on self test
This fault typically occurs when the server has encountered a diagnostic failure.
If you see this fault, take the following actions:
Step 1 Check the POST results for the server. In Cisco FPR Manager GUI, you can access the POST results from the General tab for the server. In Cisco FPR Manager CLI, you can access the POST results through the show post command under the scope for the server.
Step 2 Reacknowledge the server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePhysicalDiscoveryFailed
Server [id] (service profile: [assignedToDn]) discovery: [discovery]
Server [chassisId]/[slotId] (service profile: [assignedToDn]) discovery: [discovery]
This fault typically occurs for one of the following reasons:
- The shallow discovery that occurs when the server is associated with a service profile failed.
- The server is down.
- The data path is not working.
- Cisco FPR Manager cannot communicate with the CIMC on the server.
- The server cannot communicate with the fabric interconnect.
If you see this fault, take the following actions:
Step 1 Check the FSM tab and the current state of the server and any FSM operations.
Step 2 Check the error descriptions and see if any server components indicate a failure.
Step 3 If the server or a server component has failed, do the following:
a. Check the operational state of the server.
b. If the server is not operable, re-acknowledge the server.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePhysicalAssociationFailed
Service profile [assignedToDn] failed to associate with server [id]
Service profile [assignedToDn] failed to associate with server [chassisId]/[slotId]
This fault typically occurs for one of the following reasons:
- The service profile could not be associated with the server.
- The server is down.
- The data path is not working.
- Cisco FPR Manager cannot communicate with one or more of the fabric interconnect, the server, or a component on the server.
If you see this fault, take the following actions:
Step 1 Check the FSM tab and the current state of the server and any FSM operations.
Step 2 If the server is stuck in an inappropriate state, such as booting, power cycle the server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePhysicalInoperable
Server [id] (service profile: [assignedToDn]) health: [operability]
Server [chassisId]/[slotId] (service profile: [assignedToDn]) health: [operability]
This fault typically occurs when the server has encountered a diagnostic failure.
If you see this fault, take the following actions:
Step 1 Check the POST results for the server. In Cisco FPR Manager GUI, you can access the POST results from the General tab for the server. In Cisco FPR Manager CLI, you can access the POST results through the show post command under the scope for the server.
Step 2 Reacknowledge the server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePhysicalUnassignedMissing
Server [id] (no profile) missing
Server [chassisId]/[slotId] (no profile) missing
This fault typically occurs when the server, which is not associated with a service profile, was previously physically inserted in the slot, but cannot be detected by Cisco FPR Manager.
If you see this fault, take the following actions:
Step 1 If the server is physically present in the slot, remove and then reinsert it.
Step 2 If the server is not physically present in the slot, insert it.
Step 3 Reacknowledge the server.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePhysicalAssignedMissing
Server [id] (service profile: [assignedToDn]) missing
Server [chassisId]/[slotId] (service profile: [assignedToDn]) missing
This fault typically occurs when the server, which is associated with a service profile, was previously physically inserted in the slot, but cannot be detected by Cisco FPR Manager.
If you see this fault, take the following actions:
Step 1 If the server is physically present in the slot, remove and then reinsert it.
Step 2 If the server is not physically present in the slot, insert it.
Step 3 Reacknowledge the server.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePhysicalUnidentified
Server [id] (service profile: [assignedToDn]) has an invalid FRU: [presence]
Server [chassisId]/[slotId] (service profile: [assignedToDn]) has an invalid FRU: [presence]
This fault typically occurs because Cisco FPR Manager has detected an unsupported server or CPU.
If you see this fault, take the following actions:
Step 1 Verify that a supported server and/or CPU is installed.
Step 2 Verify that the Cisco FPR Manager capability catalog is up to date.
Step 3 Reacknowledge the server.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePhysicalUnassignedInaccessible
Server [id] (no profile) inaccessible
Server [chassisId]/[slotId] (no profile) inaccessible
This fault typically occurs when the server, which is not associated with a service profile, has lost connection to the fabric interconnects. This fault occurs if there are communication issues between the server CIMC and the fabric interconnects.
If you see this fault, take the following actions:
Step 1 Wait a few minutes to see if the fault clears. This is typically a temporary issue, and can occur after a firmware upgrade.
Step 2 If the fault does not clear after a brief time, remove the server and then reinsert it.
Step 3 Reacknowledge the server.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePhysicalAssignedInaccessible
Server [id] (service profile: [assignedToDn]) inaccessible
Server [chassisId]/[slotId] (service profile: [assignedToDn]) inaccessible
This fault typically occurs when the server, which is associated with a service profile, has lost connection to the fabric interconnects. This fault occurs if there are communication issues between the server CIMC and the fabric interconnects.
If you see this fault, take the following actions:
Step 1 Wait a few minutes to see if the fault clears. This is typically a temporary issue, and can occur after a firmware upgrade.
Step 2 If the fault does not clear after a brief time, remove the server and then reinsert it.
Step 3 Reacknowledge the server.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltLsServerFailed
Server has failed. This fault typically occurs if the adapter power-on self-test (POST) results in major and critical errors.
If you see this fault, take the following actions:
Step 1 Check the POST results for the server. In Cisco FPR Manager GUI, you can access the POST results from the General tab for the server. In Cisco FPR Manager CLI, you can access the POST results through the show post command under the scope for the server.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltLsServerDiscoveryFailed
Service profile [name] discovery failed
Shallow discovery of the server associated with the service profile failed. If the server is up and the data path is working, this fault typically occurs as a result of one of the following issues:
- Cisco FPR Manager cannot communicate with the CIMC on the server.
- The server cannot communicate with the fabric interconnect.
If you see this fault, take the following actions:
Step 1 Check the FSM tab to view the current state of the server and any FSM operations (see the CLI example after these steps).
Step 2 Check the error descriptions and see if any server components indicate a failure.
Step 3 If the server or a server component has failed, do the following:
a. Check the operational state of the server.
b. If the server is not operable, reacknowledge the server.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
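For Step 1, a possible CLI equivalent of checking the FSM, assuming a UCS-style CLI and a server in chassis 1, slot 1 (illustrative values):

  FPR-A# scope server 1/1
  FPR-A /chassis/server # show fsm status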
fltLsServerConfigFailure
Service profile [name] configuration failed due to [configQualifier]
The named configuration qualifier is not available. This fault typically occurs because Cisco FPR Manager cannot successfully deploy the service profile due to a lack of resources that meet the named qualifier. For example, this fault can occur if any of the following occurs:
- The service profile is configured for a server adapter with vHBAs, and the adapter on the server does not support vHBAs.
- The service profile is created from a template which includes a server pool, and the server pool is empty.
- The local disk configuration policy in the service profile specifies the No Local Storage mode, but the server contains local disks.
If you see this fault, take the following actions:
Step 1 Check the status of the server pool associated with the service profile. If the pool is empty, add more blade servers to it.
Step 2 Check the state of the server and ensure that it is in either the discovered or unassociated state.
Step 3 If the server is associated or undiscovered, do one of the following:
– Disassociate the server from the current service profile.
– Select another server to associate with the service profile.
Step 4 Review each policy in the service profile and verify that the selected server meets the requirements in the policy.
Step 5 If the server does not meet the requirements of the service profile, do one of the following:
– Modify the service profile to match the server.
– Select another server that does meet the requirements to associate with the service profile.
Step 6 If you can verify that the server meets the requirements of the service profile, create a show tech-support file and contact Cisco TAC.
fltLsServerMaintenanceFailed
Service profile [name] maintenance failed
Cisco FPR Manager currently does not use this fault.
If you see this fault, create a show tech-support file and contact Cisco TAC.
fltLsServerRemoved
Service profile [name] underlying resource removed
Cisco FPR Manager cannot access the server associated with the service profile. This fault typically occurs as a result of one of the following issues:
- The server was removed from the slot.
- The server is not seated properly in the slot.
If you see this fault, take the following actions:
Step 1 If the server was removed from the slot, reinsert the server in the slot.
Step 2 If the server was not removed, remove and reinsert the server. NOTE: If the server is operable, this action can be disruptive to current operations.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltLsServerInaccessible
Service profile [name] cannot be accessed
Cisco FPR Manager cannot communicate with the CIMC on the server. This fault typically occurs as a result of one of the following issues:
- The CIMC on the server is down.
- The server ports have failed.
- The I/O module is offline.
If you see this fault, take the following actions:
Step 1 If Cisco FPR Manager shows that the CIMC is down, physically reseat the server.
Step 2 If Cisco FPR Manager shows that the server ports have failed, attempt to enable them.
Step 3 If the I/O module is offline, check for faults on that component.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltLsServerAssociationFailed
Service profile [name] association failed for [pnDn]
The service profile could not be associated with the server. This fault typically occurs because Cisco FPR Manager cannot communicate with one or more of the following:
- The fabric interconnect
- The CIMC on the server
- A component on the server
If you see this fault, take the following actions:
Step 1 Check the FSM tab for the server and service profile to determine why the association failed.
Step 2 If the server is stuck in an inappropriate state, such as booting, power cycle the server (see the CLI example after these steps).
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
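For Step 2, a sketch of power cycling the server from the CLI, assuming a UCS-style CLI and server 1/1 (illustrative values); note that this reboots the server immediately:

  FPR-A# scope server 1/1
  FPR-A /chassis/server # cycle cycle-immediate
  FPR-A /chassis/server* # commit-buffer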
fltLsServerUnassociated
Service profile [name] is not associated
The service profile has not yet been associated with a server or a server pool.
If you see this fault, take the following actions:
Step 1 If you did not intend to associate the service profile, ignore the fault.
Step 2 If you did intend to associate the service profile, check the association failure fault.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltLsServerServer-unfulfilled
Server [pnDn] does not fulfill Service profile [name] due to [configQualifier]
The server no longer meets the qualification requirements of the service profile.
If you see this fault, take the following actions:
Step 1 Check the server inventory and compare it to the service profile qualifications.
Step 2 If the server inventory does not match the service profile qualifications, do one of the following:
– Associate the server with a different service profile.
– Ensure the server has sufficient resources to qualify for the current service profile.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEtherSwitchIntFIoSatellite-connection-absent
No link between IOM port [chassisId]/[slotId]/[portId] and fabric interconnect [switchId]:[peerSlotId]/[peerPortId]
This fault is raised when an I/O module fabric port, which links the I/O module port and the fabric interconnect, is not functional.
If you see this fault, take the following actions:
Step 1 Verify the fabric interconnect-chassis topology. Make sure each I/O module is connected to only one fabric interconnect.
Step 2 Ensure that the fabric interconnect server port is configured and enabled (see the CLI example after these steps).
Step 3 Ensure that the links are plugged in properly and reacknowledge the chassis.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
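For Step 2, a sketch of configuring a fabric interconnect port as a server port, assuming a UCS-style CLI; fabric a and port 1/9 are illustrative assumptions:

  FPR-A# scope eth-server
  FPR-A /eth-server # scope fabric a
  FPR-A /eth-server/fabric # create interface 1 9
  FPR-A /eth-server/fabric* # commit-buffer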
fltEtherSwitchIntFIoSatellite-wiring-problem
Invalid connection between IOM port [chassisId]/[slotId]/[portId] and fabric interconnect [switchId]:[peerSlotId]/[peerPortId]
This fault typically occurs as a result of a satellite wiring problem on the network-facing interface of an I/O module: Cisco FPR Manager has detected that at least one IOM uplink is misconnected to one of the fabric interconnect ports.
If you see this fault, take the following actions:
Step 1 Verify the fabric interconnect-chassis topology. Make sure each I/O module is connected to only one fabric interconnect.
Step 2 Ensure that the links are plugged in properly and re-acknowledge the chassis.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentPsuPowerSupplyProblem
Power supply [id] in chassis [id] power: [power]
Power supply [id] in fabric interconnect [id] power: [power]
Power supply [id] in fex [id] power: [power]
Power supply [id] in server [id] power: [power]
This fault typically occurs when Cisco FPR Manager detects a problem with a power supply unit in a chassis, fabric interconnect, or FEX; for example, the PSU is not functional.
If you see this fault, take the following actions:
Step 1 Verify that the power cord is properly connected to the PSU and the power source.
Step 2 Verify that the power source is 220 volts.
Step 3 Verify that the PSU is properly installed in the chassis or fabric interconnect (see the status-check example after these steps).
Step 4 Remove the PSU and reinstall it.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
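Before and after reseating the PSU (Steps 3 and 4), you can check how the fault manifests from the CLI. This is a sketch assuming a UCS-style CLI and chassis 1 (illustrative values):

  FPR-A# scope chassis 1
  FPR-A /chassis # show psu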
fltEquipmentFanDegraded
Fan [id] in Fan Module [tray]-[id] under chassis [id] operability: [operability]
Fan [id] in fabric interconnect [id] operability: [operability]
Fan [id] in fex [id] operability: [operability]
Fan [id] in Fan Module [tray]-[id] under server [id] operability: [operability]
This fault occurs when one or more fans in a fan module are not operational, but at least one fan is operational.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the fan module.
Step 2 Review the Cisco FPR Site Preparation Guide and ensure the fan module has adequate airflow, including front and back clearance.
Step 3 Verify that the air flows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace the faulty fan modules.
Step 8 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power-capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 9 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentFanInoperable
Fan [id] in Fan Module [tray]-[id] under chassis [id] operability: [operability]
Fan [id] in fabric interconnect [id] operability: [operability]
Fan [id] in fex [id] operability: [operability]
Fan [id] in Fan Module [tray]-[id] under server [id] operability: [operability]
This fault occurs if a fan is not operational.
If you see this fault, take the following actions:
Step 1 Remove the fan module and reinstall it. Remove only one fan module at a time.
Step 2 Replace the fan module with a different fan module.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentPsuInoperable
Power supply [id] in chassis [id] operability: [operability]
Power supply [id] in fabric interconnect [id] operability: [operability]
Power supply [id] in fex [id] operability: [operability]
Power supply [id] in server [id] operability: [operability]
This fault typically occurs when Cisco FPR Manager detects a problem with a power supply unit in a chassis, fabric interconnect, or FEX; for example, the PSU is not functional.
If you see this fault, take the following actions:
Step 1 Verify that the power cord is properly connected to the PSU and the power source.
Step 2 Verify that the power source is 220 volts.
Step 3 Verify that the PSU is properly installed in the chassis or fabric interconnect.
Step 4 Remove the PSU and reinstall it.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentIOCardRemoved
[side] IOM [chassisId]/[id] ([switchId]) is removed
This fault typically occurs because an I/O module is removed from the chassis. In a cluster configuration, the chassis fails over to the other I/O module. For a standalone configuration, the chassis associated with the I/O module loses network connectivity. This is a critical fault because it can result in the loss of network connectivity and disrupt data traffic through the I/O module.
If you see this fault, take the following actions:
Step 1 Reinsert the I/O module, configure the fabric interconnect ports connected to it as server ports, and wait a few minutes to see if the fault clears.
Step 2 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentFanModuleMissing
Fan module [tray]-[id] in chassis [id] presence: [presence]
Fan module [tray]-[id] in server [id] presence: [presence]
Fan module [tray]-[id] in fabric interconnect [id] presence: [presence]
This fault occurs if a fan module slot is not equipped or the fan module has been removed from its slot.
If you see this fault, take the following actions:
Step 1 If the reported slot is empty, insert a fan module into the slot.
Step 2 If the reported slot contains a fan module, remove and reinsert the fan module.
Step 3 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentPsuMissing
Power supply [id] in chassis [id] presence: [presence]
Power supply [id] in fabric interconnect [id] presence: [presence]
Power supply [id] in fex [id] presence: [presence]
Power supply [id] in server [id] presence: [presence]
This fault typically occurs when Cisco FPR Manager detects a problem with a power supply unit in a chassis, fabric interconnect, or a FEX. For example, the PSU is missing.
If you see this fault, take the following actions:
Step 1 If the PSU is physically present in the slot, remove and then reinsert it.
Step 2 If the PSU is not physically present in the slot, insert a new PSU.
Step 3 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentIOCardThermalProblem
[side] IOM [chassisId]/[id] ([switchId]) operState: [operState]
This fault occurs when there is a thermal problem on an I/O module. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the I/O module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the I/O modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace faulty I/O modules.
Step 8 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power-capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 9 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentFanModuleThermalThresholdNonCritical
Fan module [tray]-[id] in chassis [id] temperature: [thermal]
Fan module [tray]-[id] in server [id] temperature: [thermal]
Fan module [tray]-[id] in fabric interconnect [id] temperature: [thermal]
This fault occurs when the temperature of a fan module has exceeded a non-critical threshold value, but is still below the critical threshold. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the fan module (see the CLI example after these steps).
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the fan modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace faulty fan modules.
Step 8 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power-capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 9 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
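For Step 1, a possible way to spot-check the reported temperatures from the CLI, assuming a UCS-style CLI; chassis 1 and fan module 1-2 are illustrative assumptions:

  FPR-A# scope chassis 1
  FPR-A /chassis # scope fan-module 1 2
  FPR-A /chassis/fan-module # show stats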
fltEquipmentPsuThermalThresholdNonCritical
Power supply [id] in chassis [id] temperature: [thermal]
Power supply [id] in fabric interconnect [id] temperature: [thermal]
Power supply [id] in server [id] temperature: [thermal]
This fault occurs when the temperature of a PSU module has exceeded a non-critical threshold value, but is still below the critical threshold. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the PSU module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the PSU modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace faulty PSU modules.
Step 8 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power-capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 9 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentFanModuleThermalThresholdCritical
Fan module [tray]-[id] in chassis [id] temperature: [thermal]
Fan module [tray]-[id] in server [id] temperature: [thermal]
Fan module [tray]-[id] in fabric interconnect [id] temperature: [thermal]
This fault occurs when the temperature of a fan module has exceeded a critical threshold value. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the fan module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the fan modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace faulty fan modules.
Step 8 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power-capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 9 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentPsuThermalThresholdCritical
Power supply [id] in chassis [id] temperature: [thermal]
Power supply [id] in fabric interconnect [id] temperature: [thermal]
Power supply [id] in server [id] temperature: [thermal]
This fault occurs when the temperature of a PSU module has exceeded a critical threshold value. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the PSU module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the PSU modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace faulty PSU modules.
Step 8 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power-capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 9 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentFanModuleThermalThresholdNonRecoverable
Fan module [tray]-[id] in chassis [id] temperature: [thermal]
Fan module [tray]-[id] in server [id] temperature: [thermal]
Fan module [tray]-[id] in fabric interconnect [id] temperature: [thermal]
This fault occurs when the temperature of a fan module has been out of operating range, and the issue is not recoverable. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the fan module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the fan modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace faulty fan modules.
Step 8 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power-capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 9 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentPsuThermalThresholdNonRecoverable
Power supply [id] in chassis [id] temperature: [thermal]
Power supply [id] in fabric interconnect [id] temperature: [thermal]
Power supply [id] in server [id] temperature: [thermal]
This fault occurs when the temperature of a PSU module has been out of operating range, and the issue is not recoverable. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the PSU module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the PSU modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace faulty PSU modules.
Step 8 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power-capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 9 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentPsuVoltageThresholdNonCritical
Power supply [id] in chassis [id] voltage: [voltage]
Power supply [id] in fabric interconnect [id] voltage: [voltage]
Power supply [id] in fex [id] voltage: [voltage]
Power supply [id] in server [id] voltage: [voltage]
This fault occurs when the PSU voltage is outside the normal operating range but has not yet reached a critical stage. Normally, the PSU recovers from this situation on its own.
If you see this fault, take the following actions:
Step 1 Monitor the PSU for further degradation (see the CLI example after these steps).
Step 2 Remove and reseat the PSU.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
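For Step 1, a possible way to monitor the PSU voltage from the CLI, assuming a UCS-style CLI; chassis 1 and PSU 1 are illustrative assumptions:

  FPR-A# scope chassis 1
  FPR-A /chassis # scope psu 1
  FPR-A /chassis/psu # show stats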
fltEquipmentPsuVoltageThresholdCritical
Power supply [id] in chassis [id] voltage: [voltage]
Power supply [id] in fabric interconnect [id] voltage: [voltage]
Power supply [id] in fex [id] voltage: [voltage]
Power supply [id] in server [id] voltage: [voltage]
This fault occurs when the PSU voltage has exceeded the specified hardware voltage rating.
If you see this fault, take the following actions:
Step 1 Remove and reseat the PSU.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentPsuVoltageThresholdNonRecoverable
Power supply [id] in chassis [id] voltage: [voltage]
Power supply [id] in fabric interconnect [id] voltage: [voltage]
Power supply [id] in fex [id] voltage: [voltage]
Power supply [id] in server [id] voltage: [voltage]
This fault occurs when the PSU voltage has exceeded the specified hardware voltage rating; the PSU hardware may have been damaged as a result, or may be at risk of damage.
If you see this fault, take the following actions:
Step 1 Remove and reseat the PSU.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentPsuPerfThresholdNonCritical
Power supply [id] in chassis [id] output power: [perf]
Power supply [id] in fabric interconnect [id] output power: [perf]
Power supply [id] in server [id] output power: [perf]
This fault is raised as a warning if the current output of the PSU in a chassis, fabric interconnect, or rack server does not match the desired output value.
If you see this fault, take the following actions:
Step 1 Monitor the PSU status.
Step 2 If possible, remove and reseat the PSU.
Step 3 If the above action did not resolve the issue, create a show tech-support file for the chassis and Cisco FPR Manager, and contact Cisco TAC.
fltEquipmentPsuPerfThresholdCritical
Power supply [id] in chassis [id] output power: [perf]
Power supply [id] in fabric interconnect [id] output power: [perf]
Power supply [id] in server [id] output power: [perf]
This fault occurs if the current output of the PSU in a chassis, fabric interconnect, or rack server is far below or above the desired output value.
If you see this fault, take the following actions:
Step 1 Monitor the PSU status.
Step 2 Plan to replace the PSU as soon as possible.
Step 3 If the above actions did not resolve the issue, create a show tech-support file for the chassis and Cisco FPR Manager, and contact Cisco TAC.
fltEquipmentPsuPerfThresholdNonRecoverable
Power supply [id] in chassis [id] output power: [perf]
Power supply [id] in fabric interconnect [id] output power: [perf]
Power supply [id] in server [id] output power: [perf]
This fault occurs if the current output of the PSU in a chassis, fabric interconnect, or rack server is far above or below the non-recoverable threshold value.
If you see this fault, plan to replace the PSU as soon as possible.
fltEquipmentFanPerfThresholdNonCritical
Fan [id] in Fan Module [tray]-[id] under chassis [id] speed: [perf]
Fan [id] in fabric interconnect [id] speed: [perf]
Fan [id] in Fan Module [tray]-[id] under server [id] speed: [perf]
This fault occurs when the fan speed reading from the fan controller does not match the desired fan speed and is outside of the normal operating range. This can indicate a problem with a fan or with the reading from the fan controller.
If you see this fault, take the following actions:
Step 1 Monitor the fan status.
Step 2 If the problem persists for a long period of time or if other fans do not show the same problem, reseat the fan.
Step 3 Replace the fan module.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentFanPerfThresholdCritical
Fan [id] in Fan Module [tray]-[id] under chassis [id] speed: [perf]
Fan [id] in fabric interconnect [id] speed: [perf]
Fan [id] in Fan Module [tray]-[id] under server [id] speed: [perf]
This fault occurs when the fan speed read from the fan controller does not match the desired fan speed and has exceeded the critical threshold, putting the fan at risk of failure. This can indicate a problem with a fan or with the reading from the fan controller.
If you see this fault, take the following actions:
Step 1 Monitor the fan status.
Step 2 If the problem persists for a long period of time or if other fans do not show the same problem, reseat the fan.
Step 3 If the above actions did not resolve the issue, create a show tech-support file for the chassis and contact Cisco TAC.
fltEquipmentFanPerfThresholdNonRecoverable
Fan [id] in Fan Module [tray]-[id] under chassis [id] speed: [perf]
Fan [id] in fabric interconnect [id] speed: [perf]
Fan [id] in Fan Module [tray]-[id] under server [id] speed: [perf]
This fault occurs when the fan speed read from the fan controller has far exceeded the desired fan speed. It frequently indicates that the fan has failed.
If you see this fault, take the following actions:
Step 1 Replace the fan module.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentIOCardFirmwareUpgrade
Chassis controller in IOM [chassisId]/[id] ([switchId]) firmware upgrade problem: [upgradeStatus]
This fault typically occurs when an IOM upgrade fails.
If you see this fault, take the following actions:
Step 1 On the FSM tab for the IOM, verify whether the FSM for the upgrade completed successfully or failed (see the CLI example after these steps).
Step 2 If the FSM failed, review the error message in the FSM.
Step 3 If the error message is self-explanatory, verify the physical connectivity. For example, an error message could be No Connection to Endpoint or Link Down.
Step 4 If the above action did not resolve the issue and the fault persists, create a show tech-support file and contact Cisco TAC.
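For Step 1, a possible CLI equivalent of checking the upgrade FSM, assuming a UCS-style CLI; chassis 1 and IOM 1 are illustrative assumptions:

  FPR-A# scope chassis 1
  FPR-A /chassis # scope iom 1
  FPR-A /chassis/iom # show fsm status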
fltEquipmentChassisUnsupportedConnectivity
Current connectivity for chassis [id] does not match discovery policy: [configState]
This fault typically occurs when the current connectivity for a chassis does not match the configuration in the chassis discovery policy.
If you see this fault, take the following actions:
Step 1 Verify that the correct number of links are configured in the chassis discovery policy.
Step 2 Check the state of the I/O module links.
Step 3 Reacknowledge the chassis.
Step 4 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentChassisUnacknowledged
Chassis [id] connectivity configuration: [configState]
This fault typically occurs when one or more of the I/O module links from the chassis are unacknowledged.
If you see this fault, take the following actions:
Step 1 Check the state of the I/O module links.
Step 2 Reacknowledge the chassis (see the CLI example after these steps).
Step 3 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
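For Step 2, a sketch of reacknowledging the chassis from the CLI, assuming a UCS-style CLI and chassis 1 (illustrative value); note that reacknowledgement is disruptive to traffic through the chassis:

  FPR-A# acknowledge chassis 1
  FPR-A* # commit-buffer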
fltEquipmentIOCardUnsupportedConnectivity
IOM [chassisId]/[id] ([switchId]) current connectivity does not match discovery policy or connectivity is unsupported: [configState]
This fault typically occurs when the current connectivity for an I/O module does not match the configuration in the chassis discovery policy.
If you see this fault, take the following actions:
Step 1 Verify that the correct number of links are configured in the chassis discovery policy.
Step 2 Check the state of the I/O module links.
Step 3 Note that at least two links must be connected between the FEX and a 61xx fabric interconnect.
Step 4 Reacknowledge the chassis.
Step 5 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentIOCardUnacknowledged
IOM [chassisId]/[id] ([switchId]) connectivity configuration: [configState]
This fault typically occurs when an I/O module is unacknowledged.
If you see this fault, take the following actions:
Step 1 Check the state of the I/O module links.
Step 2 Reacknowledge the chassis.
Step 3 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentIOCardPeerDisconnected
IOM [chassisId]/[id] ([switchId]) peer connectivity: [peerCommStatus]
This fault typically occurs when an I/O module is unable to communicate with its peer I/O module.
If you see this fault, take the following actions:
Step 1 Wait a few minutes to see if the fault clears. This is typically a temporary issue, and can occur after a firmware upgrade.
Step 2 If the fault does not clear after a few minutes, remove and reinsert the I/O module.
Step 3 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentChassisIdentity
Chassis [id] has a mismatch between FRU identity reported by Fabric/IOM vs. FRU identity reported by CMC
This fault typically occurs when the FRU information for a chassis is corrupted or malformed.
If you see this fault, take the following actions:
Step 1 Verify that the capability catalog in Cisco FPR Manager is up to date. If necessary, update the catalog.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentIOCardIdentity
[side] IOM [chassisId]/[id] ([switchId]) has a malformed FRU
This fault typically occurs when the FRU information for an I/O module is corrupted or malformed.
If you see this fault, take the following actions:
Step 1 Verify that the capability catalog in Cisco FPR Manager is up to date. If necessary, update the catalog.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentFanModuleIdentity
Fan Module [tray]-[id] in chassis [id] has a malformed FRU
Fan Module [tray]-[id] in server [id] has a malformed FRU
Fan Module [tray]-[id] in fabric interconnect [id] has a malformed FRU
This fault typically occurs when the FRU information for a fan module is corrupted or malformed.
If you see this fault, take the following actions:
Step 1 Verify that the capability catalog in Cisco FPR Manager is up to date. If necessary, update the catalog.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentPsuIdentity
Power supply [id] on chassis [id] has a malformed FRU
Power supply [id] on server [id] has a malformed FRU
This fault typically occurs when the FRU information for a power supply unit is corrupted or malformed.
If you see this fault, take the following actions:
Step 1 Verify that the capability catalog in Cisco FPR Manager is up to date. If necessary, update the catalog.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentChassisPowerProblem
Power state on chassis [id] is [power]
This fault typically occurs when the chassis fails to meet the minimal power requirements defined in the power policy or when one or more power supplies have failed.
If you see this fault, take the following actions:
Step 1 In Cisco FPR Manager, verify that all PSUs for the chassis are functional.
Step 2 Verify that all PSUs are seated properly within the chassis and are powered on.
Step 3 Physically unplug and replug the power cord into the chassis.
Step 4 If all PSUs are operating at maximum capacity, either add more PSUs to the chassis or redefine the power policy in Cisco FPR Manager.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentChassisThermalThresholdCritical
Thermal condition on chassis [id]. [thermalStateQualifier]
This fault occurs under the following conditions:
- A component within a chassis is operating outside the safe thermal operating range.
- The chassis controller in the supervisor is unable to determine the thermal condition of a blade server; in this case, the show tech-support file for the chassis provides a more detailed report of the most severe thermal conditions currently applicable for that chassis.
If you see this fault, take the following actions:
Step 1 Check the temperature readings for the blade servers and supervisor and ensure they are within the recommended thermal safe operating range.
Step 2 If the fault reports a "Thermal Sensor threshold crossing in blade" error for one or more blade servers, check if DIMM or processor temperature related faults have been raised against that blade.
Step 3 If the fault reports a "Thermal Sensor threshold crossing in supervisor" error for supervisor, check if thermal faults have been raised against that supervisor. Those faults include details of the thermal condition.
Step 4 If the fault reports a "Missing or Faulty Fan" error, check on the status of that fan. If it needs replacement, create a show tech-support file for the chassis and contact Cisco TAC.
Step 5 If the fault reports a "No connectivity between supervisor and blade" or "Thermal Sensor readings unavailable from blade" error, check if that blade server is operational and whether any faults have been raised against that blade server. In this situation, the chassis controller may go into a fail-safe operating mode and the fan speeds may increase as a precautionary measure.
Step 6 If the above actions did not resolve the issue and the condition persists, create a show tech-support file for Cisco FPR Manager and the chassis and contact Cisco TAC.
fltEquipmentChassisThermalThresholdNonRecoverable
Thermal condition on chassis [id]. [thermalStateQualifier]
FPRM raises this fault under the following conditions:
- A component within a chassis is operating outside the safe thermal operating range.
- The chassis controller in the supervisor is unable to determine the thermal condition of a blade server; in this case, the show tech-support file for the chassis provides a more detailed report of the most severe thermal conditions currently applicable for that chassis.
If you see this fault, take the following actions:
Step 1 Check the temperature readings for the blade servers and supervisor and ensure they are within the recommended thermal safe operating range.
Step 2 If the fault reports a "Thermal Sensor threshold crossing in blade" error for one or more blade servers, check if DIMM or processor temperature related faults have been raised against that blade.
Step 3 If the fault reports a "Thermal Sensor threshold crossing in supervisor" error for supervisor, check if thermal faults have been raised against that supervisor. Those faults include details of the thermal condition.
Step 4 If the fault reports a "Missing or Faulty Fan" error, check on the status of that fan. If it needs replacement, create a show tech-support file for the chassis and contact Cisco TAC.
Step 5 If the fault reports a "No connectivity between supervisor and blade" or "Thermal Sensor readings unavailable from blade" error, check if that blade server is operational and whether any faults have been raised against that blade server. In this situation, the chassis controller may go into a fail-safe operating mode and the fan speeds may increase as a precautionary measure.
Step 6 If the above actions did not resolve the issue and the condition persists, create a show tech-support file for Cisco FPR Manager and the chassis and contact Cisco TAC.
fltComputeBoardCmosVoltageThresholdCritical
Possible loss of CMOS settings: CMOS battery voltage on server [chassisId]/[slotId] is [cmosVoltage]
Possible loss of CMOS settings: CMOS battery voltage on server [id] is [cmosVoltage]
This fault is raised when the CMOS battery voltage has dropped to lower than the normal operating range. This could impact the clock and other CMOS settings.
If you see this fault, replace the battery.
fltComputeBoardCmosVoltageThresholdNonRecoverable
Possible loss of CMOS settings: CMOS battery voltage on server [chassisId]/[slotId] is [cmosVoltage]
Possible loss of CMOS settings: CMOS battery voltage on server [id] is [cmosVoltage]
This fault is raised when the CMOS battery voltage has dropped quite low and is unlikely to recover. This impacts the clock and other CMOS settings.
If you see this fault, replace the battery.
fltMgmtEntityElection-failure
Fabric Interconnect [id], election of primary management instance has failed
This fault occurs in the unlikely event that the fabric interconnects in a cluster configuration cannot agree on the selection of the primary fabric interconnect. This impacts the full HA functionality of the fabric interconnect cluster.
If you see this fault, take the following actions:
Step 1 Verify that the initial setup configuration is correct on both fabric interconnects.
Step 2 Verify that the L1 and L2 links are properly connected between the fabric interconnects.
Step 3 In the Cisco FPR Manager CLI, run the cluster force primary command from local-mgmt on one fabric interconnect (see the example after these steps).
Step 4 Reboot the fabric interconnects.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
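For Step 3, the command sequence referenced above, run from the fabric interconnect that you want to become primary; the hostname FPR-B is an illustrative assumption:

  FPR-B# connect local-mgmt
  FPR-B(local-mgmt)# cluster force primary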
fltMgmtEntityHa-not-ready
Fabric Interconnect [id], HA functionality not ready
This fault occurs if Cisco FPR Manager cannot discover or communicate with one or more chassis or rack servers to write the HA Cluster state. This impacts the full HA functionality of the fabric interconnect cluster.
If you see this fault, take the following actions:
Step 1 Verify that the initial setup configuration is correct on both fabric interconnects.
Step 2 Verify that the L1 and L2 links are properly connected between the fabric interconnects.
Step 3 Verify that the IOMs and/or FEXes are reachable and the server ports are enabled and operationally up.
Step 4 Verify that the chassis and/or rack servers are powered up and reachable.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMgmtEntityVersion-incompatible
Fabric Interconnect [id], management services, incompatible versions
This fault occurs if the Cisco FPR Manager software on the subordinate fabric interconnect is not the same release as that of the primary fabric interconnect. This impacts the full HA functionality of the fabric interconnect cluster.
If you see this fault, take the following actions:
Step 1 Upgrade the Cisco FPR Manager software on the subordinate fabric interconnect to the same release as the primary fabric interconnect and verify that both fabric interconnects are running the same release of Cisco FPR Manager.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentFanMissing
Fan [id] in fabric interconnect [id] presence: [presence]
Fan [id] in fex [id] presence: [presence]
Fan [id] in Fan Module [tray]-[id] under server [id] presence: [presence]
This fault occurs in the unlikely event that a fan in a fan module cannot be detected.
If you see this fault, take the following actions:
Step 1 Insert/reinsert the fan module in the slot that is reporting the issue.
Step 2 Replace the fan module with a different fan module, if available.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentIOCardAutoUpgradingFirmware
IOM [chassisId]/[id] ([switchId]) is auto upgrading firmware
This fault typically occurs when an I/O module is auto-upgrading its firmware. Auto-upgrade occurs when the firmware version on the IOM is incompatible with the firmware version on the fabric interconnect.
If you see this fault, take the following actions:
Step 1 If the IOM and fabric interconnects are not running the same firmware version, wait for the auto-upgrade to complete.
Step 2 When the IOM upgrade is completed, verify that Cisco FPR Manager has cleared this fault.
Step 3 If you see this fault after the IOM overall status changes to operable, create a show tech-support file and contact Cisco TAC.
fltFirmwarePackItemImageMissing
[type] image with vendor [hwVendor], model [hwModel] and version [version] is deleted
This fault typically occurs when the image to which a firmware package item refers is missing.
If you see this fault, take the following actions:
Step 1 In Cisco FPR Manager GUI, navigate to the Firmware Management Images tab and determine whether the missing image is available or not.
Step 2 If the image is present, click on it to verify the model and vendor.
Step 3 If the image for the required model and vendor is not present, download that image or bundle from the Cisco.com website (see the CLI example after these steps).
Step 4 If the image is present and the fault persists, create a show tech-support file and contact Cisco TAC.
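For Step 3, a sketch of downloading an image from the CLI after fetching it from Cisco.com to a local SCP server, assuming a UCS-style CLI; the server address, path, and bundle filename are illustrative assumptions:

  FPR-A# scope firmware
  FPR-A /firmware # download image scp://admin@192.0.2.10/images/fpr-k9-bundle.bin
  FPR-A /firmware # show download-task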
fltEtherSwitchIntFIoSatellite-wiring-numbers-unexpected
Chassis discovery policy conflict: Link IOM [chassisId]/[slotId]/[portId] to fabric interconnect [switchId]:[peerSlotId]/[peerPortId] not configured
The configuration of the chassis discovery policy conflicts with the physical IOM uplinks. Cisco FPR Manager raises this fault when the chassis discovery policy is configured for more links than are physically cabled between the IOM uplinks on the chassis and the fabric interconnect.
If you see this fault, take the following actions:
Step 1 Ensure that you cable at least the same number of IOM uplinks as are configured in the chassis discovery policy, and that you configure the corresponding server ports on the fabric interconnect.
Step 2 Reacknowledge the chassis.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMgmtEntityManagement-services-failure
Fabric Interconnect [id], management services have failed
This fault occurs in an unlikely event that management services fail on a fabric interconnect. This impacts the full HA functionality of the fabric interconnect cluster.
If you see this fault, take the following actions:
Step 1 Verify that the initial setup configuration is correct on both fabric interconnects.
Step 2 Verify that the L1 and L2 links are properly connected between the fabric interconnects.
Step 3 Reboot the fabric interconnects.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMgmtEntityManagement-services-unresponsive
Fabric Interconnect [id], management services are unresponsive
This fault occurs when management services on a fabric interconnect are unresponsive. This impacts the full HA functionality of the fabric interconnect cluster.
If you see this fault, take the following actions:
Step 1 Verify that the initial setup configuration is correct on both fabric interconnects.
Step 2 Verify that the L1 and L2 links are properly connected between the fabric interconnects.
Step 3 Reboot the fabric interconnects.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentChassisInoperable
Chassis [id] operability: [operability]
This fault typically occurs for one of the following reasons:
- The fabric interconnect cannot communicate with a chassis. For a cluster configuration, this fault means that neither fabric interconnect can communicate with the chassis.
- The chassis has an invalid FRU.
If you see this fault, take the following actions:
Step 1 In Cisco FPR Manager, reacknowledge the chassis that raised the fault.
Step 2 Physically unplug and replug the power cord into the chassis.
Step 3 Verify that the I/O modules are functional.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEtherServerIntFIoHardware-failure
IOM [transport] interface [portId] on chassis [id] oper state: [operState], reason: [stateQual]
Fabric Interconnect [transport] interface [portId] on fabric interconnect [id] oper state: [operState], reason: [stateQual]
IOM [transport] interface [portId] on fex [id] oper state: [operState], reason: [stateQual]
This fault is raised on the IOM/FEX backplane ports when Cisco FPR Manager detects a hardware failure.
If you see this fault, create a show tech-support file and contact Cisco TAC.
fltDcxVcMgmt-vif-down
IOM [chassisId]/[slotId] ([switchId]) management VIF [id] down, reason [stateQual]
This fault occurs when the transport VIF for an I/O module is down. Cisco FPR Manager raises this fault when a fabric interconnect reports the connectivity state on the virtual interface as one of the following:
If you see this fault, take the following actions:
Step 1 Verify that the chassis discovery has gone through successfully. Check the states on all communicating ports from end to end.
Step 2 If connectivity seems correct, decommission and recommission the chassis.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltSysdebugMEpLogMEpLogLog
Log capacity on [side] IOM [chassisId]/[id] is [capacity]
Log capacity on Management Controller on server [chassisId]/[slotId] is [capacity]
Log capacity on Management Controller on server [id] is [capacity]
This fault typically occurs because Cisco FPR Manager has detected that the system event log (SEL) on the server is approaching full capacity. The available capacity in the log is low. This is an info-level fault and can be ignored if you do not want to clear the SEL at this time.
If you see this fault, you can clear the SEL in Cisco FPR Manager if desired.
fltSysdebugMEpLogMEpLogVeryLow
Log capacity on [side] IOM [chassisId]/[id] is [capacity]
Log capacity on Management Controller on server [chassisId]/[slotId] is [capacity]
Log capacity on Management Controller on server [id] is [capacity]
This fault typically occurs because Cisco FPR Manager has detected that the system event log (SEL) on the server is almost full. The available capacity in the log is very low. This is an info-level fault and can be ignored if you do not want to clear the SEL at this time.
If you see this fault, you can clear the SEL in Cisco FPR Manager if desired.
fltSysdebugMEpLogMEpLogFull
Log capacity on [side] IOM [chassisId]/[id] is [capacity]
Log capacity on Management Controller on server [chassisId]/[slotId] is [capacity]
Log capacity on Management Controller on server [id] is [capacity]
This fault typically occurs because Cisco FPR Manager could not transfer the SEL file to the location specified in the SEL policy. This is an info-level fault and can be ignored if you do not want to clear the SEL at this time.
If you see this fault, take the following actions:
Step 1 Verify the configuration of the SEL policy to ensure that the location, user, and password provided are correct.
Step 2 If you do want to transfer and clear the SEL and the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePoolEmpty
This fault typically occurs when the selected server pool does not contain any servers.
If you see this fault, take the following actions:
Step 1 Verify the qualifier settings in the server pool policy qualifications. If the policy was modified after the server was discovered, reacknowledge the server.
Step 2 Manually associate the service profile with a server.
Step 3 If the server pool is not used, ignore the fault.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltUuidpoolPoolEmpty
UUID suffix pool [name] is empty
This fault typically occurs when a UUID suffix pool does not contain any UUID suffixes.
If you see this fault, take the following actions:
Step 1 If the pool is in use, add a block of UUID suffixes to the pool.
Step 2 If the pool is not in use, ignore the fault.
fltIppoolPoolEmpty
This fault typically occurs when an IP address pool does not contain any IP addresses.
If you see this fault, take the following actions:
Step 1 If the pool is in use, add a block of IP addresses to the pool.
Step 2 If the pool is not in use, ignore the fault.
fltMacpoolPoolEmpty
This fault typically occurs when a MAC address pool does not contain any MAC addresses.
If you see this fault, take the following actions:
Step 1 If the pool is in use, add a block of MAC addresses to the pool (see the sketch after this list).
Step 2 If the pool is not in use, ignore the fault.
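The empty-pool faults in this section (UUID suffix, IP, MAC, and later WWN and IQN pools) share the same remedy: scope to the pool and create a block. A minimal CLI sketch for a MAC pool, assuming a UCS-style CLI; the pool name and address range are illustrative.

    FPR-A# scope org /
    FPR-A /org # scope mac-pool default       ! pool name is illustrative
    FPR-A /org/mac-pool # create block 00:25:B5:00:00:01 00:25:B5:00:00:40
    FPR-A /org/mac-pool/block* # commit-buffer    ! changes apply only after the commit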
fltFirmwareUpdatableImageUnusable
backup image is unusable. reason: [operStateQual]
This fault typically occurs when the backup firmware image on an endpoint is unusable.
If you see this fault, take the following actions:
Step 1 Review the fault and the error message on the FSM tab for the endpoint to determine why the firmware image is unusable.
Step 2 If the firmware image is bad or corrupted, download another copy from the Cisco website and update the backup version on the endpoint with the new image.
Step 3 If the image is present and the fault persists, create a show tech-support file and contact Cisco TAC.
fltFirmwareBootUnitCantBoot
unable to boot the startup image. End point booted with backup image
This fault typically occurs when the startup firmware image on an endpoint is corrupted or invalid, and the endpoint cannot boot from that image.
If you see this fault, take the following actions:
Step 1 Review the fault and the error message on the FSM tab for the endpoint to determine why the firmware image is unusable. The error message usually includes an explanation for why the endpoint could not boot from the startup image, such as Bad-Image or Checksum Failed.
Step 2 If the firmware image is bad or corrupted, download another copy from the Cisco website and update the startup version on the endpoint with the new image (see the sketch after these steps).
Step 3 If the fault persists, create a show tech-support file and contact Cisco TAC.
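For Step 2, a replacement image can be staged from the CLI before the startup version is updated. A minimal sketch assuming an SCP-reachable host; the URL, filename, and prompt are placeholders.

    FPR-A# scope firmware
    FPR-A /firmware # download image scp://admin@10.0.0.5/images/fpr-k9-system.bin    ! placeholder URL
    FPR-A /firmware # show download-task      ! wait for the transfer to report success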
fltFcpoolInitiatorsEmpty
FC pool [purpose] [name] is empty
This fault typically occurs when a WWN pool does not contain any WWNs.
If you see this fault, take the following actions:
Step 1 If the pool is in use, add a block of WWNs to the pool.
Step 2 If the pool is not in use, ignore the fault.
fltEquipmentIOCardInaccessible
[side] IOM [chassisId]/[id] ([switchId]) is inaccessible
This fault typically occurs because an I/O module has lost its connection to the fabric interconnects. In a cluster configuration, the chassis fails over to the other I/O module. For a standalone configuration, the chassis associated with the I/O module loses network connectivity. This is a critical fault because it can result in the loss of network connectivity and disrupt data traffic through the I/O module.
If you see this fault, take the following actions:
Step 1 Wait a few minutes to see if the fault clears. This is typically a temporary issue and can occur after a firmware upgrade.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltDcxVIfLinkState
Virtual interface [id] link state is down
This fault occurs when Cisco FPR cannot send or receive data through an uplink port.
If you see this fault, take the following actions:
Step 1 Reenable the uplink port that failed (see the CLI sketch after these steps).
Step 2 Check the associated port to ensure it is in up state.
Step 3 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
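A minimal CLI sketch of Step 1, assuming an Ethernet uplink on fabric A, slot 1, port 17 (all illustrative):

    FPR-A# scope eth-uplink
    FPR-A /eth-uplink # scope fabric a
    FPR-A /eth-uplink/fabric # scope interface 1 17    ! slot and port are illustrative
    FPR-A /eth-uplink/fabric/interface # enable
    FPR-A /eth-uplink/fabric/interface* # commit-buffer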
fltEquipmentFanModuleDegraded
Fan module [tray]-[id] in chassis [id] operability: [operability]
Fan module [tray]-[id] in server [id] operability: [operability]
Fan module [tray]-[id] in fabric interconnect [id] operability: [operability]
This fault occurs when a fan module is not operational.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the fan module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the fan module has adequate airflow, including front and back clearance.
Step 3 Verify that the air flows for the fan module are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 8 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentIOCardPost-failure
[side] IOM [chassisId]/[id] ([switchId]) POST failure
This fault typically occurs when an I/O module encounters errors during the Power On Self Test (POST). The impact of this fault varies according to the errors that were encountered during POST.
If you see this fault, take the following actions:
Step 1 Check the POST results for the I/O module. In the Cisco FPR Manager GUI, you can access the POST results from the General tab for the I/O module. In the Cisco FPR Manager CLI, you can access the POST results through the show post command under the scope for the I/O module (see the sketch after these steps).
Step 2 If the POST results indicate a FRU error, check whether Cisco FPR Manager has raised a fault for that FRU and follow the recommended action for that fault.
Step 3 Otherwise, reboot the I/O module.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
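For Step 1, the POST results can be read under the I/O module scope in the CLI. The chassis and IOM IDs are illustrative, and the exact scope name may vary by release.

    FPR-A# scope chassis 1
    FPR-A /chassis # scope iom 1     ! IOM scope name is an assumption
    FPR-A /chassis/iom # show post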
fltEquipmentFanPerfThresholdLowerNonRecoverable
Fan [id] in Fan Module [tray]-[id] under chassis [id] speed: [perf]
Fan [id] in fabric interconnect [id] speed: [perf]
Fan [id] in Fan Module [tray]-[id] under server [id] speed: [perf]
This fault occurs when the fan speed reading from the fan controller is far below the desired fan speed, and the fan has likely failed.
If you see this fault, create a detailed show tech-support file for the chassis and replace the fan module. If necessary, contact Cisco TAC.
fltComputePhysicalPost-failure
Server [id] POST or diagnostic failure
Server [chassisId]/[slotId] POST or diagnostic failure
This fault typically occurs when the server has encountered a diagnostic failure or an error during POST.
If you see this fault, take the following actions:
Step 1 Check the POST results for the server. In the Cisco FPR Manager GUI, you can access the POST results from the General tab for the server. In the Cisco FPR Manager CLI, you can access the POST results through the show post command under the scope for the server (see the sketch after these steps).
Step 2 If the above action did not resolve the issue, execute the show tech-support command and contact Cisco Technical Support.
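POST results for a server are read the same way under the server scope; server 1/1 is illustrative.

    FPR-A# scope server 1/1
    FPR-A /chassis/server # show post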
fltEquipmentPsuOffline
Power supply [id] in chassis [id] power: [power]
Power supply [id] in fabric interconnect [id] power: [power]
Power supply [id] in fex [id] power: [power]
Power supply [id] in server [id] power: [power]
This fault typically occurs when Cisco FPR Manager detects that a power supply unit in a chassis, fabric interconnect, or FEX is offline.
If you see this fault, take the following actions:
Step 1 Verify that the power cord is properly connected to the PSU and the power source.
Step 2 Verify that the power source is 220 volts.
Step 3 Verify that the PSU is properly installed in the chassis or fabric interconnect.
Step 4 Remove the PSU and reinstall it.
Step 5 If the above actions did not resolve the issue, note down the type of PSU, execute the show tech-support command, and contact Cisco Technical Support.
fltStorageRaidBatteryInoperable
RAID Battery on server [chassisId]/[slotId] operability: [operability]. Reason: [operQualifierReason]
RAID Battery on server [id] operability: [operability]. Reason: [operQualifierReason]
This fault occurs when the RAID backup unit is not operational.
If you see this fault, take the following actions:
Step 1 If the backup unit is a battery, replace the battery.
Step 2 If the backup unit is a supercapacitor type, verify that the supercapacitor is present, and install one if it is missing.
Step 3 If the backup unit is a supercapacitor type, verify that the TFM is present, and install one if it is missing.
Step 4 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltSysdebugMEpLogTransferError
Server [chassisId]/[slotId] [type] transfer failed: [operState]
Server [id] [type] transfer failed: [operState]
This fault occurs when the transfer of a managed endpoint log file, such as the SEL, fails.
If you see this fault, take the following actions:
Step 1 If the fault is related to the SEL, verify the connectivity to the CIMC on the server.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputeRtcBatteryInoperable
RTC Battery on server [chassisId]/[slotId] operability: [operability]
This fault is raised when the CMOS battery voltage is below the normal operating range. This impacts the system clock.
If you see this fault, replace the CMOS battery.
fltMemoryBufferUnitThermalThresholdNonCritical
Buffer Unit [id] on server [chassisId]/[slotId] temperature: [thermal]
Buffer Unit [id] on server [id] temperature: [thermal]
This fault occurs when the temperature of a memory buffer unit on a blade or rack server exceeds a non-critical threshold value, but is still below the critical threshold. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
- If sensors on a CPU reach 179.6°F (82°C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the server.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the servers have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis or rack server are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 8 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMemoryBufferUnitThermalThresholdCritical
Buffer Unit [id] on server [chassisId]/[slotId] temperature: [thermal]
Buffer Unit [id] on server [id] temperature: [thermal]
This fault occurs when the temperature of a memory buffer unit on a blade or rack server exceeds a critical threshold value. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
- If sensors on a CPU reach 179.6°F (82°C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the server.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the servers have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis or rack server are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 8 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMemoryBufferUnitThermalThresholdNonRecoverable
Buffer Unit [id] on server [chassisId]/[slotId] temperature: [thermal]
Buffer Unit [id] on server [id] temperature: [thermal]
This fault occurs when the temperature of a memory buffer unit on a blade or rack server has been out of the operating range, and the issue is not recoverable. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
- If sensors on a CPU reach 179.6°F (82°C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the server.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the servers have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis or rack server are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 8 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputeIOHubThermalNonCritical
IO Hub on server [chassisId]/[slotId] temperature: [thermal]
This fault is raised when the IO controller temperature is outside the upper or lower non-critical threshold.
If you see this fault, monitor other environmental events related to this server and ensure the temperature ranges are within recommended ranges.
fltComputeIOHubThermalThresholdCritical
IO Hub on server [chassisId]/[slotId] temperature: [thermal]
This fault is raised when the IO controller temperature is outside the upper or lower critical threshold.
If you see this fault, take the following actions:
Step 1 Monitor other environmental events related to the server and ensure the temperature ranges are within recommended ranges.
Step 2 If possible, power off the server temporarily.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputeIOHubThermalThresholdNonRecoverable
IO Hub on server [chassisId]/[slotId] temperature: [thermal]
This fault is raised when the IO controller temperature is outside the recoverable range of operation.
If you see this fault, take the following actions:
Step 1 Shut down the server immediately.
Step 2 Create a show tech-support file and contact Cisco TAC.
fltEquipmentChassisIdentity-unestablishable
Chassis [id] has an invalid FRU
This fault typically occurs because Cisco FPR Manager has detected an unsupported chassis. For example, the model, vendor, or revision is not recognized.
If you see this fault, take the following actions:
Step 1 Verify that the capability catalog in Cisco FPR Manager is up to date. If necessary, update the catalog.
Step 2 If the above action did not resolve the issue, execute the show tech-support command and contact Cisco technical support.
fltSwVlanPortNsResourceStatus
This fault occurs when the total number of configured VLANs in the Cisco FPR instance has exceeded the allowed maximum number of configured VLANs on the fabric interconnect.
If you see this fault, take the following actions:
Step 1 In the Cisco FPR Manager CLI or Cisco FPR Manager GUI, check the port VLAN count to determine by how many VLANs the system is over the maximum (see the sketch after these steps).
Step 2 Reduce the VLAN port count in one of the following ways:
- Delete VLANs configured on the LAN cloud.
- Delete VLANs configured on vNICs.
- Unconfigure one or more vNICs.
- Unconfigure one or more uplink Ethernet ports on the fabric interconnect.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
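A sketch of the check in Step 1; the command name below is a UCS-style assumption, so verify it for your release.

    FPR-A# scope fabric-interconnect a
    FPR-A /fabric-interconnect # show vlan-port-count    ! command name is an assumption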
fltFabricLanPinGroupEmpty
This fault typically occurs when a LAN pin group does not contain any targets.
If you see this fault, add a target to the LAN pin group.
fltAdaptorExtEthIfMisConnect
Adapter [id] eth interface [id] in server [id] mis-connected
The link for a network-facing adapter interface is misconnected. Cisco FPR Manager raises this fault when any of the following scenarios occur:
- Cisco FPR Manager detects new connectivity between a previously configured switch port or FEX port and the adapter’s external interface.
- Cisco FPR Manager detects a misconnected link between a fabric interconnect or FEX and its non-peer adapter’s interface.
If you see this fault, take the following actions:
Step 1 Check whether the adapter link is connected to a port that belongs to its peer fabric interconnect or FEX.
Step 2 If that connectivity seems correct, reacknowledge the server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltAdaptorHostEthIfMisConnect
Adapter [id] eth interface [id] in server [id] mis-connected
The link for a network-facing host interface is misconnected. Cisco FPR Manager raises this fault when any of the following scenarios occur:
- Cisco FPR Manager detects new connectivity between a previously configured switch port and the host Ethernet interface.
- Cisco FPR Manager detects a misconnected link between the host interface and its non-peer fabric interconnect.
If you see this fault, take the following actions:
Step 1 Check whether the host Ethernet interface is connected to a port belonging to its peer fabric interconnect.
Step 2 If connectivity seems correct, reacknowledge the server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltPowerBudgetPowerBudgetCmcProblem
Power cap application failed for chassis [id]
This fault typically occurs when the server CIMC has failed to enforce the configured power cap.
If you see this fault, take the following actions:
Step 1 Check the power consumption of the chassis. If the chassis is consuming significantly more power than configured in the power cap, consider reducing the group cap so that the power consumption of other chassis can be reduced to make up for the increase.
Step 2 If the above action did not resolve the issue, create a show tech-support file for Cisco FPR Manager and the chassis and then contact Cisco TAC.
fltPowerBudgetPowerBudgetBmcProblem
Power cap application failed for server [chassisId]/[slotId]
Power cap application failed for server [id]
This fault typically occurs when the server CIMC or BIOS has failed to enforce the configured power cap.
If you see this fault, take the following actions:
Step 1 Check the power consumption of the blade server. If the server is consuming significantly more power than configured in the power cap, switch to a manual per-blade cap configuration. If the power consumption is still too high, consider reducing the group cap so that the power consumption of other chassis can be reduced to make up for the increase.
Step 2 If the power consumption is still too high, the CIMC or BIOS software is likely faulty.
Step 3 Create a show tech-support file for Cisco FPR Manager and the chassis and then contact Cisco TAC.
fltPowerBudgetPowerBudgetDiscFail
Insufficient power available to discover server [chassisId]/[slotId]
Insufficient power available to discover server [id]
This fault typically occurs when discovery fails due to unavailable power in the group.
If you see this fault, take the following actions:
Step 1 Consider increasing the group cap.
Step 2 Reduce the number of blade servers or chassis in the Cisco FPR instance.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltPowerGroupPowerGroupInsufficientBudget
insufficient budget for power group [name]
This fault typically occurs when the group cap is insufficient to meet the minimum hardware requirements.
If you see this fault, take the following actions:
Step 1 Consider increasing the group cap.
Step 2 Reduce the number of blade servers or chassis in the Cisco FPR instance.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltPowerGroupPowerGroupBudgetIncorrect
admin committed insufficient for power group [name], using previous value [operCommitted]
This fault typically occurs when the group cap is insufficient to meet the minimum hardware requirements. Under these circumstances, Cisco FPR Manager uses the previously entered group cap for provisioning.
If you see this fault, take the following actions:
Step 1 Consider increasing the group cap.
Step 2 Reduce the number of blade servers or chassis in the Cisco FPR instance.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMgmtIfMisConnect
Management Port [id] in server [id] is mis connected
This fault occurs when the server and FEX connectivity changes.
If you see this fault, take the following actions:
Step 1 Check the connectivity between the server and FEX.
Step 2 If the connectivity was changed by mistake, restore it to its previous configuration.
Step 3 If the connectivity change was intentional, reacknowledge the server.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltLsComputeBindingAssignmentRequirementsNotMet
Assignment of service profile [name] to server [pnDn] failed
The server could not be assigned to the selected service profile. This fault typically occurs as a result of one of the following issues:
- The selected server does not meet the requirements of the service profile.
- If the service profile was configured for restricted migration, the selected server does not match the currently or previously assigned server.
If you see this fault, select a different server that meets the requirements of the service profile or matches the currently or previously assigned server.
fltEquipmentFexPost-failure
This fault typically occurs when a FEX encounters errors during the Power On Self Test (POST). The impact of this fault varies depending on which errors were encountered during POST.
If you see this fault, take the following actions:
Step 1 Check the POST results for the FEX. In the Cisco FPR Manager GUI, you can access the POST results from the General tab for the FEX. In the Cisco FPR Manager CLI, you can access the POST results by entering the show post command under the scope for the FEX.
Step 2 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentFexIdentity
This fault typically occurs when the FRU information for a FEX is corrupted or malformed.
If you see this fault, take the following actions:
Step 1 Verify that the capability catalog in Cisco FPR Manager is up to date. If necessary, update the catalog.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltAdaptorHostEthIfMissing
Connection to Adapter [id] eth interface [id] in server [id] missing
The link for a network-facing host interface is missing. Cisco FPR Manager raises this fault when it detects missing connectivity between a previously configured switch port and its previous peer host interface.
If you see this fault, take the following actions:
Step 1 Check whether the adapter link is connected to a port that belongs to its non-peer fabric interconnect.
Step 2 If that connectivity seems correct, reacknowledge the server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltPortPIoInvalid-sfp
[transport] port [portId] on chassis [id] role : [ifRole] transceiver type:[xcvrType]
[transport] port [slotId]/[aggrPortId]/[portId] on fabric interconnect [id] role : [ifRole] transceiver type:[xcvrType]
[transport] port [slotId]/[portId] on fabric interconnect [id] role : [ifRole] transceiver type:[xcvrType]
This fault is raised against a fabric interconnect port, network-facing IOM port, or FEX module port if an unsupported transceiver type is inserted. The port cannot be used if it has an unsupported transceiver.
If you see this fault, replace the transceiver with a supported SFP type. Refer to the documentation on the Cisco website for a list of supported SFPs.
fltMgmtIfMissing
Connection to Management Port [id] in server [id] is missing
This fault occurs when the connectivity between a server and FEX is removed or unconfigured.
If you see this fault, take the following actions:
Step 1 Check the connectivity between the server and FEX.
Step 2 If the connectivity was changed by mistake, restore it to its previous configuration.
Step 3 If the connectivity change was intentional, reacknowledge the server.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltFabricEthLanPcEpDown
[type] Member [slotId]/[aggrPortId]/[portId] of Port-Channel [portId] on fabric interconnect [id] is down, membership: [membership]
[type] Member [slotId]/[portId] of Port-Channel [portId] on fabric interconnect [id] is down, membership: [membership]
This fault typically occurs when a member port in an Ethernet port channel is down.
If you see this fault, take the following action:
Step 1 Check the link connectivity on the upstream Ethernet switch (see the sketch after these steps).
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
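Step 1 can begin from the fabric interconnect itself by dropping to the switching plane; the port-channel and member port numbers are illustrative.

    FPR-A# connect nxos
    FPR-A(nxos)# show port-channel summary       ! identify the down member
    FPR-A(nxos)# show interface ethernet 1/17    ! inspect that member link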
fltEquipmentIOCardThermalThresholdNonCritical
[side] IOM [chassisId]/[id] ([switchId]) temperature: [thermal]
This fault occurs when the temperature of an I/O module has exceeded a non-critical threshold value, but is still below the critical threshold. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
- If sensors on a CPU reach 179.6°F (82°C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the I/O module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the chassis and I/O modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis and I/O module are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 8 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentIOCardThermalThresholdCritical
[side] IOM [chassisId]/[id] ([switchId]) temperature: [thermal]
This fault occurs when the temperature of an I/O module has exceeded a critical threshold value. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
- If sensors on a CPU reach 179.6°F (82°C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the I/O module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the chassis and I/O modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis and I/O module are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace the faulty I/O modules.
Step 8 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 9 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentIOCardThermalThresholdNonRecoverable
[side] IOM [chassisId]/[id] ([switchId]) temperature: [thermal]
This fault occurs when the temperature of an I/O module has been out of the operating range, and the issue is not recoverable. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50°F (10°C) nor hotter than 95°F (35°C).
- If sensors on a CPU reach 179.6°F (82°C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the I/O module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the chassis and I/O modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis and I/O module are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace the faulty I/O modules.
Step 8 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 9 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentChassisSeeprom-inoperable
Device [id] SEEPROM operability: [seepromOperState]
This fault occurs in the unlikely event that the chassis shared storage (SEEPROM) is not operational.
If you see this fault, create a show tech-support file and contact Cisco TAC.
fltExtmgmtIfMgmtifdown
Management interface on Fabric Interconnect [id] is [operState]
This fault occurs when a fabric interconnect reports that the operational state of an external management interface is down.
If you see this fault, take the following actions:
Step 1 Check the state transitions of the external management interface on the fabric interconnect.
Step 2 Check the link connectivity for the external management interface.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltPowerChassisMemberPowerGroupCapInsufficient
Chassis [id] cannot be capped as group cap is low. Please consider raising the cap.
This fault typically occurs when an updated group cap is insufficient to meet the minimum hardware requirements and a chassis that has just been added to the power group cannot be capped as a result.
If you see this fault, take the following actions:
Step 1 Consider increasing the group cap.
Step 2 Reduce the number of blade servers or chassis in the Cisco FPR instance.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltPowerChassisMemberChassisFirmwareProblem
Chassis [id] cannot be capped as at least one of the CMC or CIMC or BIOS firmware version is less than 1.4. Please upgrade the firmware for cap to be applied.
This fault typically occurs when the CIMC firmware on a server is an earlier release than Cisco FPR, Release 1.4.
If you see this fault, consider upgrading the CIMC firmware, and the entire Cisco FPR instance if necessary, to Cisco FPR, Release 1.4 or later.
fltPowerChassisMemberChassisPsuInsufficient
Chassis [id] cannot be capped as at least two PSU need to be powered
This fault typically occurs when fewer than two PSUs in the chassis are powered on.
If you see this fault, insert at least two PSUs and power them on.
fltPowerChassisMemberChassisPsuRedundanceFailure
Chassis [id] was configured for redundancy, but running in a non-redundant configuration.
This fault typically occurs when chassis power redundancy has failed.
If you see this fault, take the following actions:
Step 1 Consider adding more PSUs to the chassis.
Step 2 Replace any non-functional PSUs.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltPowerBudgetPowerCapReachedCommit
P-State lowered as consumption hit power cap for server [chassisId]/[slotId]
P-State lowered as consumption hit power cap for server [id]
This fault typically occurs when Cisco FPR Manager is actively capping the power for a blade server.
If you see this fault, no action is needed.
fltSysdebugAutoCoreFileExportTargetAutoCoreTransferFailure
Auto core transfer failure at remote server [hostname]:[path] [exportFailureReason]
This fault occurs when Cisco FPR Manager cannot transfer a core file to a remote TFTP server. This is typically the result of one of the following issues:
- The remote TFTP server is not accessible.
- One or more of the parameters for the TFTP server that are specified for the core export target, such as path, port, and server name, are incorrect.
If you see this fault, take the following actions:
Step 1 Verify the connectivity to the remote server (see the sketch after these steps).
Step 2 Verify the path information of the remote server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
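Steps 1 and 2 can be sanity-checked from the local management shell; the server address is a placeholder.

    FPR-A# connect local-mgmt
    FPR-A(local-mgmt)# ping 10.0.0.5    ! placeholder address of the remote TFTP server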
fltFabricMonSpanConfigFail
Configuration for traffic monitor [name] failed, reason: [configFailReason]
This fault typically occurs when the configuration of a traffic monitoring session is incorrect.
If you see this fault, correct the configuration problem provided in the fault description.
fltPowerBudgetChassisPsuInsufficient
Chassis [id] cannot be capped as the available PSU power is not enough for the chassis and the blades. Please correct the problem by checking input power or replace the PSU
This fault typically occurs when the available PSU power is not enough to deploy the power budget of the chassis and blades.
If you see this fault, check the PSU input power or replace the PSU.
fltPowerBudgetTStateTransition
Blade [chassisId]/[slotId] has been severely throttled. CIMC can recover if budget is redeployed to the blade or by rebooting the blade. If problem persists, please ensure that OS is ACPI compliant
Rack server [id] has been severely throttled. CIMC can recover if budget is redeployed to the blade or by rebooting the blade. If problem persists, please ensure that OS is ACPI compliant
This fault typically occurs when the processor T-state is used to severely throttle the CPU.
If you see this fault, take the following actions:
Step 1 Redeploy the power budget for the affected power group, blade server, or chassis.
Step 2 If the problem persists, reboot the blade server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltPowerPolicyPowerPolicyApplicationFail
Insufficient budget to apply no-cap priority through policy [name]. Blades will continue to be capped
This fault occurs when a power policy cannot be applied to one or more blade servers. The affected blade servers cannot operate normally without power capping due to the limited power budget for those servers.
If you see this fault, take the following actions:
Step 1 Increase the power budget for the blade servers in the power policy.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMgmtIfNew
New connection discovered on Management Port [id] in server [id]
This fault occurs when the connectivity between a server and a FEX is added or changed.
If you see this fault, take the following actions:
Step 1 Check the connectivity between the server and FEX.
Step 2 If the connectivity was changed by mistake, restore it to its previous configuration.
Step 3 If the connectivity change was intentional, reacknowledge the server.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltAdaptorExtEthIfMissing
Connection to Adapter [id] eth interface [id] in server [id] missing
The link for a network-facing adapter interface is missing. Cisco FPR Manager raises this fault when it detects that the connectivity between a previously configured port on a fabric interconnect or FEX and its prior peer network-facing adapter interface is misconnected or missing.
If you see this fault, take the following actions:
Step 1 Check whether the adapter interface is connected to a port belonging to its peer fabric interconnect or FEX.
Step 2 If the connectivity seems correct, reacknowledge the server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltStorageLocalDiskSlotEpUnusable
Local disk [id] on server [serverId] is not usable by the operating system
This fault occurs when the server disk drive is in a slot that is not supported by the storage controller.
If you see this fault, take the following actions:
Step 1 Insert the server disk drive in a supported slot.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltFabricEthEstcPcEpDown
[type] Member [slotId]/[aggrPortId]/[portId] of Port-Channel [portId] on fabric interconnect [id] is down, membership: [membership]
[type] Member [slotId]/[portId] of Port-Channel [portId] on fabric interconnect [id] is down, membership: [membership]
This fault typically occurs when a member port in an Ethernet port channel is down.
If you see this fault, take the following action:
Step 1 Check the link connectivity on the upstream Ethernet switch.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentFexIdentity-unestablishable
This fault typically occurs because Cisco FPR Manager detected an unsupported FEX. For example, the model, vendor, or revision is not recognized.
If you see this fault, take the following actions:
Step 1 Verify that the capability catalog in Cisco FPR Manager is up to date. If necessary, update the catalog.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentFanModuleInoperable
Fan module [tray]-[id] in chassis [id] operability: [operability]
Fan module [tray]-[id] in server [id] operability: [operability]
Fan module [tray]-[id] in fabric interconnect [id] operability: [operability]
This fault occurs if a fan module is not operational.
If you see this fault, take the following actions:
Step 1 Remove and reinstall the fan module. If multiple fans are affected by this fault, remove and reinstall one fan module at a time.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltLsmaintMaintPolicyUnresolvableScheduler
Schedule [schedName] referenced by maintenance policy [name] does not exist
The schedule that is referenced by the maintenance policy does not exist. This fault typically occurs as a result of one of the following issues:
If you see this fault, take the following actions:
Step 1 Check whether the named schedule exists.
Step 2 If the schedule was deleted or is missing, recreate it.
Step 3 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltProcessorUnitIdentity-unestablishable
Processor [id] on server [chassisId]/[slotId] has an invalid FRU
Processor [id] on server [id] has an invalid FRU
This fault typically occurs because Cisco FPR Manager has detected an unsupported CPU in the server. For example, the model, vendor, or revision is not recognized.
If you see this fault, take the following actions:
Step 1 Verify that the capability catalog in Cisco FPR Manager is up to date. If necessary, update the catalog.
Step 2 If the above action did not resolve the issue, you may have an unsupported CPU configuration in the server. Create a show tech-support file and contact Cisco TAC.
fltIqnpoolPoolEmpty
This fault typically occurs when an IQN pool does not contain any IQNs.
If you see this fault, take the following actions:
Step 1 If the pool is in use, add a block of IQNs to the pool.
Step 2 If the pool is not in use, ignore the fault.
fltFabricDceSwSrvPcEpDown
[type] Member [slotId]/[aggrPortId]/[portId] of Port-Channel [portId] on fabric interconnect [id] is down, membership: [membership]
[type] Member [slotId]/[portId] of Port-Channel [portId] on fabric interconnect [id] is down, membership: [membership]
This fault typically occurs when a member port in a fabric port channel is down.
If you see this fault, take the following action:
Step 1 Check the link connectivity between the FEX or IOM and the fabric interconnect.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltFabricEpMgrEpTransModeFail
Port constraint violation on switch [id]: [confQual]
This fault occurs when at least one logical interface is misconfigured. This can happen when upgrading to a different type or series of fabric interconnect or when importing a configuration. The configuration must meet the following constraints:
If you see this fault, take the following action:
Step 1 Create a list of all logical interfaces that are misconfigured and have caused an ‘error-misconfigured’ fault.
Step 2 For each logical interface, note the reason listed in the fault for the misconfiguration.
Step 3 Log into Cisco FPR Manager and correct each misconfigured logical interface. If you used the Cisco FPR Manager CLI, commit all changes.
Step 4 Review any faults or error messages that describe additional misconfigurations and correct those errors.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltFabricPIoEpErrorMisconfigured
Interface [name] is [operState]. Reason: [operStateReason]
This fault occurs when a logical interface is misconfigured. This can happen when upgrading to a different type or series of fabric interconnect or when importing a configuration.
If you see this fault, take the following action:
Step 1 Create a list of all logical interfaces that are misconfigured and have caused an ‘error-misconfigured’ fault.
Step 2 For each logical interface, note the reason listed in the fault for the misconfiguration.
Step 3 Log into Cisco FPR Manager and correct each misconfigured logical interface. If you used the Cisco FPR Manager CLI, commit all changes.
Step 4 Review any faults or error messages that describe additional misconfigurations and correct those errors.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltFabricEthLanEpMissingPrimaryVlan
Primary vlan missing from fabric: [switchId], port: [slotId]/[aggrPortId]/[portId].
Primary vlan missing from fabric: [switchId], port: [slotId]/[portId].
This fault occurs when an uplink port or port channel is configured with a primary VLAN that does not exist in the Cisco FPR instance.
If you see this fault, take the following action:
Step 1 Update the configuration of the port or port channel to include a primary VLAN.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltFabricEthLanPcMissingPrimaryVlan
Primary vlan missing from fabric: [switchId], port-channel: [portId].
This fault occurs when an uplink port or port channel is configured with a primary VLAN that does not exist in the Cisco FPR instance.
If you see this fault, take the following action:
Step 1 Update the configuration of the port or port channel to include a primary VLAN.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltVnicEtherPinningMismatch
Hard pinning target for eth vNIC [name], service profile [name] does not have all the required vlans configured
This fault occurs when one or more VLANs required by a vNIC in a service profile are not configured on the target uplink port or port channel for a hard-pinned LAN pin group.
If you see this fault, take the following actions:
Step 1 In the LAN Uplinks Manager of the Cisco FPR Manager GUI, configure all of the VLANs in the vNIC in the target uplink port or port channel for the LAN pin group. If you prefer to use the Cisco FPR Manager CLI, navigate to scope /eth-uplink/vlan and create the required member ports for the LAN pin group (see the sketch after these steps).
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
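A minimal CLI sketch of Step 1, following the scope /eth-uplink/vlan path named above. The VLAN name, fabric ID, and port are illustrative, and the member-port syntax is an assumption to verify for your release.

    FPR-A# scope eth-uplink
    FPR-A /eth-uplink # scope vlan VLAN100               ! VLAN name is illustrative
    FPR-A /eth-uplink/vlan # create member-port a 1 17   ! syntax is an assumption
    FPR-A /eth-uplink/vlan/member-port* # commit-buffer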
fltVnicEtherPinningMisconfig
Hard pinning target for eth vNIC [name], service profile [name] is missing or misconfigured
This fault occurs when one or more vNIC target uplink ports or port channels for a hard-pinned LAN pin group are either missing or misconfigured as the wrong port type.
If you see this fault, take the following actions:
Step 1 Review the LAN pin group configuration.
Step 2 Correct the configuration of the port and port channels in the pin group.
Step 3 Ensure that all required VLANs are allowed on the target ports or port channels.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltProcessorUnitDisabled
Processor [id] on server [chassisId]/[slotId] operState: [operState]
Processor [id] on server [id] operState: [operState]
This fault occurs in the unlikely event that a processor is disabled.
If you see this fault, take the following actions:
Step 1 If this fault occurs on a blade server, remove and reinsert the server into the chassis.
Step 2 In Cisco FPR Manager, decommission and recommission the blade server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMemoryUnitDisabled
DIMM [location] on server [chassisId]/[slotId] operState: [operState]
DIMM [location] on server [id] operState: [operState]
This fault is raised when the server BIOS disables a DIMM. The BIOS could disable a DIMM for several reasons, including incorrect location of the DIMM or incompatible speed.
If you see this fault, refer to the Cisco FPR B-Series Troubleshooting Guide for information on how to resolve the DIMM issues.
fltFirmwareBootUnitActivateStatusFailed
Activation failed and Activate Status set to failed.
This fault typically occurs for one of the following reasons:
- Firmware activation failed.
- The version of firmware running on the server after activation is not the version listed in Cisco FPR Manager as the startup image.
If you see this fault, take the following actions:
Step 1 Go to the FSM tab for the endpoint on which the fault is raised and review the error description to determine why the activation failed.
Step 2 If the FSM failed, review the error message in the FSM.
Step 3 If possible, correct the problem described in the error message.
Step 4 If the problem persists, create a show tech-support file and contact Cisco TAC.
fltFabricInternalPcDown
[type] port-channel [portId] on fabric interconnect [id] oper state: [operState], reason: [stateQual]
This fault occurs when the transport VIF for a server is down. Cisco FPR Manager raises this fault when a fabric interconnect reports the connectivity state on the virtual interface as one of the following:
If you see this fault, take the following actions:
Step 1 Verify that the blade server discovery was successful.
Step 2 Check the states on all communicating ports from end to end.
Step 3 If connectivity seems correct, decommission and recommission the server.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMgmtEntityDevice-1-shared-storage-error
device [chassis1], error accessing shared-storage
This fault occurs in the unlikely event that the shared storage selected for writing the cluster state is not accessible. This fault is typically transient. You might see this fault when one of the following occurs: (a) the fabric interconnect boots, (b) the I/O module is reset, (c) the rack server is rebooted, or (d) the system is upgraded or downgraded. If this fault is not cleared after the system returns to normal operation following the reboot, reset, upgrade, or downgrade, it may affect the full HA functionality of the fabric interconnect cluster.
If this fault is not cleared even after the system returns to normal operation, create a show tech-support file and contact Cisco TAC.
fltMgmtEntityDevice-2-shared-storage-error
device [chassis2], error accessing shared-storage
This fault occurs in the unlikely event that the shared storage selected for writing the cluster state is not accessible. This fault is typically transient. You might see this fault when one of the following occurs: (a) the fabric interconnect boots, (b) the I/O module is reset, (c) the rack server is rebooted, or (d) the system is upgraded or downgraded. If this fault is not cleared after the system returns to normal operation following the reboot, reset, upgrade, or downgrade, it may affect the full HA functionality of the fabric interconnect cluster.
If this fault is not cleared even after the system returns to normal operation, create a show tech-support file and contact Cisco TAC.
fltMgmtEntityDevice-3-shared-storage-error
device [chassis3], error accessing shared-storage
This fault occurs in the unlikely event that the shared storage selected for writing the cluster state is not accessible. This fault is typically transient. You might see this fault when one of the following occurs: (a) the fabric interconnect boots, (b) the I/O module is reset, (c) the rack server is rebooted, or (d) the system is upgraded or downgraded. If this fault is not cleared after the system returns to normal operation following the reboot, reset, upgrade, or downgrade, it may affect the full HA functionality of the fabric interconnect cluster.
If this fault is not cleared even after the system returns to normal operation, create a show tech-support file and contact Cisco TAC.
fltMgmtEntityHa-ssh-keys-mismatched
Fabric Interconnect [id], management services, mismatched SSH keys
This fault indicates that one of the following scenarios has occurred:
- The internal SSH keys used for HA in the cluster configuration are mismatched. This causes certain operations to fail.
- Another fabric interconnect is connected to the primary fabric interconnect in the cluster without first erasing the existing configuration in the primary.
If you see this fault, take the following actions:
Step 1 Log into the Cisco FPR Manager CLI on the subordinate fabric interconnect.
Step 2 Enter the connect local-mgmt command.
Step 3 Enter erase configuration to erase the configuration on the subordinate fabric interconnect and reboot it (see the session sketch after these steps).
Step 4 When the secondary fabric interconnect has rebooted, reconfigure it for the cluster.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
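Steps 1 through 3 amount to the following session on the subordinate fabric interconnect; the hostname is illustrative and the confirmation prompt is paraphrased.

    FPR-B# connect local-mgmt
    FPR-B(local-mgmt)# erase configuration
    All configuration will be erased and the system will reboot. Continue? (yes/no): yes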
fltComputeBoardPowerFail
Motherboard of server [chassisId]/[slotId] (service profile: [assignedToDn]) power: [power]
Motherboard of server [id] (service profile: [assignedToDn]) power: [power]
This fault typically occurs when the power sensors on a blade server detect a problem.
If you see this fault, take the following actions:
Step 1 Remove and reseat the blade server in the chassis.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltVmVifLinkState
Virtual interface [vifId] link is down; reason [stateQual]
This fault occurs when Cisco FPR cannot send or receive data through an uplink port.
If you see this fault, take the following actions:
Step 1 Enable the failed uplink port.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentPsuPowerSupplyShutdown
Power supply [id] in chassis [id] shutdown reason:[powerStateQualifier]
This fault typically occurs when a power supply unit in a chassis, fabric interconnect, or FEX is shut down due to higher-than-expected current or temperature, or the failure of a fan.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the server.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the servers have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis or rack server are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Verify that the power cord is properly connected to the PSU and the power source.
Step 7 Verify that the power source is 220 volts.
Step 8 Verify that the PSU is properly installed in the chassis or fabric interconnect.
Step 9 Remove the PSU and reinstall it.
Step 10 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentPsuPowerThreshold
Power supply [id] on chassis [id] has exceeded its power threshold
Power supply [id] on server [id] has exceeded its power threshold
This fault occurs when a power supply unit is drawing too much current.
If you see this fault, create a show tech-support file and contact Cisco TAC.
fltEquipmentPsuInputError
Power supply [id] on chassis [id] has disconnected cable or bad input voltage
Power supply [id] on server [id] has disconnected cable or bad input voltage
This fault occurs when a power cable is disconnected or input voltage is incorrect.
If you see this fault, create a show tech-support file and contact Cisco TAC.
fltNetworkElementInventoryFailed
Fabric Interconnect [id] inventory is not complete [inventoryStatus]
Cisco FPR Manager raises this fault when the management subsystem is unable to perform an inventory of the physical components, such as I/O cards or physical ports.
If you see this fault, take the following actions:
Step 1 Ensure that both fabric interconnects in an HA cluster are running the same software versions.
Step 2 Ensure that the fabric interconnect software is a version that is compatible with the Cisco FPR Manager software.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltAdaptorUnitExtnUnidentifiable-fru
Adapter extension [id] in server [chassisId]/[slotId] has unidentified FRU
This fault typically occurs because Cisco FPR Manager has detected an unsupported adapter unit extension, such as a pass-through adapter. For example, the model, vendor, or revision is not recognized.
If you see this fault, take the following actions:
Step 1 Verify that a supported adapter unit extension is installed.
Step 2 Verify that the capability catalog in Cisco FPR Manager is up to date. If necessary, update the catalog.
Step 3 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltAdaptorUnitExtnMissing
Adapter extension [id] in server [chassisId]/[slotId] presence: [presence]
This fault typically occurs when an I/O adapter unit extension, such as a pass-through adapter, is missing. Cisco FPR Manager raises this fault when any of the following scenarios occur:
- The endpoint reports there is no adapter unit extension, such as a pass-through adapter, plugged into the adapter slot.
- The endpoint cannot detect or communicate with the adapter unit extension plugged into the adapter slot.
If you see this fault, take the following actions:
Step 1 Ensure the adapter unit extension is properly plugged into an adapter slot in the server.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentFexFex-unsupported
Fex [id] with model [model] is unsupported
This fault typically occurs because Cisco FPR Manager has detected an unsupported FEX. For example, the model, vendor, or revision is not recognized.
If you see this fault, take the following actions:
Step 1 Verify that a supported FEX is installed.
Step 2 Verify that the capability catalog in Cisco FPR Manager is up to date. If necessary, update the catalog.
Step 3 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltVnicIScsiConfig-failed
iSCSI vNIC [name], service profile [name] has duplicate iqn name [initiatorName]
This fault typically occurs when two or more iSCSI vNICs refer to the same IQN name.
If you see this fault, take the following actions:
Step 1 Make sure that the IQN name is unique for each iSCSI vNIC.
Step 2 Using show identity iqn, check whether the iSCSI vNIC is registered in the universe (see the example following these steps).
Step 3 Try non-disruptive actions, such as changing the description of the service profile, to register the IQN in the universe.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
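The following is a minimal sketch of the check in Step 2. The FPR-A prompt is illustrative; only the show identity iqn command comes from the step above.
FPR-A# show identity iqn
Review the output to confirm that each IQN is assigned to only one iSCSI vNIC.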
fltPkiKeyRingStatus
[name] Keyring’s certificate is invalid, reason: [certStatus].
This fault occurs when the certificate status of the keyring has become invalid.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltPkiTPStatus
[name] Trustpoint’s cert-chain is invalid, reason: [certStatus].
This fault occurs when the certificate status of the trust point has become invalid.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltComputePhysicalDisassociationFailed
Failed to disassociate server [id]
Failed to disassociate server [chassisId]/[slotId]
This fault typically occurs for one of the following reasons:
- The server is down.
- The data path is not working.
- Cisco FPR Manager cannot communicate with one or more of the fabric interconnect, the server, or a component on the server.
If you see this fault, take the following actions:
Step 1 Check the communication path to the server, including the fabric interconnect server ports and the IOM link, and check the current state of the server.
Step 2 If the server is stuck in an inappropriate state, such as booting, power cycle the server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputePhysicalNetworkMisconfigured
Server [id] (service profile: [assignedToDn]) has mis-configured network vif resources
Server [chassisId]/[slotId] (service profile: [assignedToDn]) has mis-configured network vif resources
This fault occurs when the FPRM VIF-ID map is not the same as the VIF-ID map deployed on the adapter, for example after a full backup and restore.
If you see this fault, take the following actions:
Step 1 Re-acknowledge the server, as shown in the sketch following these steps. This triggers deep discovery and deep association, which resolves the issue.
Step 2 If the above actions did not resolve the issue, execute the show tech-support command and contact Cisco Technical Support.
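The following is a minimal sketch of the re-acknowledgement in Step 1, assuming a UCSM-style acknowledge server command; the FPR-A prompt and the 1/3 chassis/slot value are illustrative.
FPR-A# acknowledge server 1/3
FPR-A* # commit-buffer
Note that re-acknowledgement triggers a disruptive rediscovery of the server, so schedule it for a maintenance window.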
fltVnicProfileProfileConfigIncorrect
The Port Profile [name] has an invalid configuration.
This fault occurs when there is an invalid entry for a port profile configuration.
Check documentation and correct the offending entry in the port profile configuration.
fltVnicEtherIfVlanAccessFault
The named vlan [name] for vNIC [name] cannot be accessed from org [name]
This fault typically occurs when a service profile’s vNIC interface (LAN) is resolvable but the service profile does not have access to the VLAN. In this case, the default VLAN will be used.
This fault will be removed if you perform one of the following actions:
Step 1 Change the vNIC’s interface name to a VLAN that you have access to.
Step 2 If you wish to use the default VLAN, change the vNIC’s interface name to default.
Step 3 Configure access to the named VLAN by creating a VLAN permit or VLAN group permit in the service profile’s org (or a parent org).
fltVnicEtherIfVlanUnresolvable
The named vlan [name] for vNIC [name] cannot be resolved
This fault (warning) occurs when a service profile’s vNIC interface (LAN) is unresolvable. In this case, the default VLAN will be used as the operational VLAN.
This fault will be removed if you perform the following action:
Step 1 Change the vNIC interface name to an existing VLAN.
fltVnicEtherIfInvalidVlan
Invalid Vlan in the allowed vlan list
This fault typically occurs when a vNIC of a service profile or a port profile contains an invalid VLAN. An invalid VLAN can be any one of the following:
- an isolated VLAN or a community VLAN that is not associated with a valid primary VLAN
- a primary VLAN with none of its associated secondary VLANs allowed on the vNIC
- a VLAN whose sharing type or primary VLAN name does not match that of the VLAN with the same ID on the LAN side or appliance side
This fault will be removed if you perform one of the following actions:
Step 1 If the invalid VLAN is an isolated or community VLAN, make sure it is mapped to a valid primary VLAN.
Step 2 If the invalid VLAN is a primary VLAN, either allow any of its secondary VLANs on the vNIC or delete it from the vNIC or port profile.
Step 3 If the invalid VLAN does not match the sharing properties of the VLAN with the same VLAN ID on the LAN side or appliance side, change the properties of this VLAN to match the other.
fltFabricVlanVlanConflictPermit
Multiple VLANs with ID [id] have different accessibility configured.
This fault occurs when multiple global VLANs with the same ID have different access configurations.
Change the access configuration by configuring VLAN/VLAN Group Permits.
fltFabricVlanReqVlanPermitUnresolved
The VLAN permit does not reference any existing vlans.
This fault occurs when a VLAN permit exists but no VLAN with the referenced name exists.
Delete the VLAN permit, create the referenced VLAN, or ignore the fault.
fltFabricVlanGroupReqVlanGroupPermitUnresolved
The VLAN permit does not reference any existing net groups.
This fault occurs when a VLAN group permit exists but there are no referenced network groups.
Delete the VLAN group permit, create the referenced VLAN group, or ignore the fault.
fltExtpolClientClientLostConnectivity
FPRM has lost connectivity with Firepower Central
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltStorageLocalDiskDegraded
Local disk [id] on server [chassisId]/[slotId] operability: [operability]. Reason: [operQualifierReason]
Local disk [id] on server [id] operability: [operability]. Reason: [operQualifierReason]
This fault occurs when the local disk has become degraded. The fault description will contain the physical drive state, which indicates the reason for the degradation.
If you see this fault, take the following actions:
Step 1 If the drive state is "rebuild" or "copyback", wait for the rebuild or copyback operation to complete.
Step 2 If the drive state is "predictive-failure", replace the disk.
fltStorageRaidBatteryDegraded
RAID Battery on server [chassisId]/[slotId] operability: [operability]. Reason: [operQualifierReason]
RAID Battery on server [id] operability: [operability]. Reason: [operQualifierReason]
This fault occurs when the RAID backup unit is degraded.
If you see this fault, take the following actions:
Step 1 If the fault reason indicates the backup unit is in a relearning cycle, wait for relearning to complete.
Step 2 If the fault reason indicates the backup unit is about to fail, replace the backup unit.
Step 3 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltStorageRaidBatteryRelearnAborted
RAID Battery on server [chassisId]/[slotId] operability: [operability]. Reason: [operQualifierReason]
RAID Battery on server [id] operability: [operability]. Reason: [operQualifierReason]
NOTE: This fault is not currently implemented by Firepower Manager. It is present only as a placeholder, possibly for another release, such as stand-alone rack servers. This fault occurs when the backup unit’s relearning cycle was aborted.
If you see this fault, take the following actions:
Step 1 Replace the backup unit.
fltStorageRaidBatteryRelearnFailed
RAID Battery on server [chassisId]/[slotId] operability: [operability]. Reason: [operQualifierReason]
RAID Battery on server [id] operability: [operability]. Reason: [operQualifierReason]
NOTE: This fault is not currently implemented by Firepower Manager. It is present only as a placeholder, possibly for another release, such as stand-alone rack servers. This fault occurs when the backup unit’s relearning cycle has failed.
If you see this fault, take the following actions:
Step 1 Replace the backup unit.
fltStorageInitiatorConfiguration-error
Initiator [name] either cannot be resolved or does not match with one of the storage targets. No zones are deployed for this initiator and the target.
Initiator either cannot be resolved or does not match with one of the targets.
If you see this fault, take the following action:
Step 1 Check whether the vHBA interface referenced by this initiator exists.
Step 2 Check whether the switch ID or VSAN name of the vHBA interface referenced by this initiator matches one of the targets.
fltStorageControllerPatrolReadFailed
Controller [id] on server [chassisId]/[slotId] had a patrol read failure. Reason: [operQualifierReason]
Controller [id] on server [id] had a patrol read failure. Reason: [operQualifierReason]
NOTE: This fault is not currently implemented by Firepower Manager. It is present only as a placeholder, possibly for another release, such as stand-alone rack servers. This fault occurs when a patrol read operation has failed.
Re-run the patrol read operation.
fltStorageControllerInoperable
Controller [id] on server [chassisId]/[slotId] is inoperable. Reason: [operQualifierReason]
Controller [id] on server [id] is inoperable. Reason: [operQualifierReason]
This fault occurs when the storage controller is inaccessible.
For PCI and mezz-based storage controllers, check the seating of the storage controller. If the problem persists, replace the controller.
fltStorageLocalDiskRebuildFailed
Local disk [id] on server [chassisId]/[slotId] operability: [operability]. Reason: [operQualifierReason]
Local disk [id] on server [id] operability: [operability]. Reason: [operQualifierReason]
NOTE: This fault is not currently implemented by Firepower Manager. It is present only as a placeholder, possibly for another release, such as stand-alone rack servers. This fault occurs when a rebuild operation has failed. This may cause a degradation in performance.
If you see this fault, take the following action:
Step 1 Retry the rebuild operation.
fltStorageLocalDiskCopybackFailed
Local disk [id] on server [chassisId]/[slotId] operability: [operability]. Reason: [operQualifierReason]
Local disk [id] on server [id] operability: [operability]. Reason: [operQualifierReason]
NOTE: This fault is not currently implemented by Firepower Manager. It is present only as a placeholder, possibly for another release, such as stand-alone rack servers. This fault occurs when a copyback operation has failed. This may cause a degradation in performance.
If you see this fault, take the following action:
Step 1 Retry the copyback operation.
fltStorageVirtualDriveInoperable
Virtual drive [id] on server [chassisId]/[slotId] operability: [operability]. Reason: [operQualifierReason]
Virtual drive [id] on server [id] operability: [operability]. Reason: [operQualifierReason]
This fault occurs when the virtual drive has become inoperable.
If you see this fault, take the following actions:
Step 1 Verify the presence and health of disks that are used by the virtual drive.
Step 2 If applicable, reseat or replace used disks.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltStorageVirtualDriveDegraded
Virtual drive [id] on server [chassisId]/[slotId] operability: [operability]. Reason: [operQualifierReason]
Virtual drive [id] on server [id] operability: [operability]. Reason: [operQualifierReason]
This fault occurs when the virtual drive has become degraded. The fault description will contain the physical drive state, which indicates the reason for the degradation.
If you see this fault, take the following actions:
Step 1 If the drive is performing a consistency check operation, wait for the operation to complete.
Step 2 Verify the presence and health of disks that are used by the virtual drive.
Step 3 If applicable, reseat or replace used disks.
Step 4 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltStorageVirtualDriveReconstructionFailed
Virtual drive [id] on server [chassisId]/[slotId] operability: [operability]. Reason: [operQualifierReason]
Virtual drive [id] on server [id] operability: [operability]. Reason: [operQualifierReason]
NOTE: This fault is not currently implemented by Firepower Manager. It is present only as a placeholder, possibly for another release, such as stand-alone rack servers. This fault occurs when a drive reconstruction operation has failed. This may cause a degradation in performance.
If you see this fault, take the following action:
Step 1 Retry the reconstruction operation.
Step 2 Delete and recreate the virtual drive.
fltStorageVirtualDriveConsistencyCheckFailed
Virtual drive [id] on server [chassisId]/[slotId] operability: [operability]. Reason: [operQualifierReason]
Virtual drive [id] on server [id] operability: [operability]. Reason: [operQualifierReason]
NOTE: This fault is not currently implemented by Firepower Manager. It is present only as a placeholder, possibly for another release, such as stand-alone rack servers. This fault occurs when a drive consistency check operation has failed. This may cause a degradation in performance.
If you see this fault, take the following action:
Step 1 Retry the consistency check operation.
Step 2 Delete and recreate the virtual drive.
fltAaaProviderGroupProvidergroup
For [dn]: Server Group with name [name] already exists. You need to specify a unique name for this object.
This fault typically occurs because Cisco FPR Manager has detected multiple provider groups with the same name.
If you see this fault, take the following actions:
Step 1 Delete the duplicate provider group that is causing this problem.
fltAaaConfigServergroup
For [dn]: [realm] Server Group with name [providerGroup] doesn’t exist or is not deployed.
This fault typically occurs because Cisco FPR Manager has detected an unsupported authentication method.
If you see this fault, take the following actions:
Step 1 Verify that the server group configured for authentication is present.
Step 2 If the server group is not configured, create the server group to use for authentication.
fltAaaRoleRoleNotDeployed
Role [name] can’t be deployed. Error: [configStatusMessage]
This fault typically occurs because Cisco FPR Manager has detected an unsupported role.
If you see this fault, take the following actions:
Step 1 Verify that the total number of roles is less than the maximum number of supported roles.
Step 2 Verify that the sum of privileges across all roles is less than the maximum supported sum of privileges.
fltAaaLocaleLocaleNotDeployed
Locale [name] can’t be deployed. Error: [configStatusMessage]
This fault typically occurs because Cisco FPR Manager has detected an unsupported locale.
If you see this fault, take the following actions:
Step 1 Verify that the total number of locales is less than the maximum number of supported locales.
fltAaaUserRoleUserRoleNotDeployed
For user: [name] role [name] can’t be assigned. Error: [configStatusMessage].
For Ldap Group: [name] role [name] can’t be assigned. Error: [configStatusMessage].
This fault typically occurs because Cisco FPR Manager has detected an unsupported user role for LDAP groups or local users.
If you see this fault, take the following actions:
Step 1 Verify that the role is present.
Step 2 Verify that the role is applied.
Step 3 Verify that the role is compatible with the locales assigned to the LDAP group or local user.
fltAaaUserLocaleUserLocaleNotDeployed
For user: [name] locale [name] can’t be assigned. Error: [configStatusMessage].
For Ldap Group: [name] locale [name] can’t be assigned. Error: [configStatusMessage].
This fault typically occurs because Cisco FPR Manager has detected an unsupported user locale for LDAP groups or local users.
If you see this fault, take the following actions:
Step 1 Verify that the locale is present.
Step 2 Verify that the locale is applied.
Step 3 Verify that the locale is compatible with the roles assigned to the LDAP group or local user.
fltPkiKeyRingKeyRingNotDeployed
Keyring [name] can’t be deployed. Error: [configStatusMessage]
This fault typically occurs because Cisco FPR Manager has detected an invalid Keyring.
If you see this fault, take the following actions:
Step 1 Verify that the trust point configured for this keyring is present.
Step 2 Verify that the trust point found above is applied.
fltCommSnmpSyscontactEmpty
FPR Manager cannot deploy an empty value of SNMP Syscontact when Callhome is enabled. The previous value [sysContact] for SNMP Syscontact has been retained.
This fault typically occurs when FPR Manager receives an invalid configuration from FPR Central wherein SNMP Syscontact is set to empty when Callhome is enabled.
If you see this fault, please ensure that the SNMP Syscontact field on FPR Central is configured correctly for the domain group corresponding to this FPRM.
fltCommDateTimeCommTimeZoneInvalid
Timezone: [timezone] is invalid
This fault typically occurs because Cisco FPR Manager has detected an invalid timezone.
If you see this fault, take the following action:
Step 1 Verify that the configured timezone is a valid, supported timezone.
fltAaaUserLocalUserNotDeployed
Local User [name] can’t be deployed. Error: [configStatusMessage]
This fault typically occurs because Cisco FPR Manager has detected an invalid system user.
If you see this fault, take the following actions:
Step 1 Verify that the local user name is not also used by an SNMP user.
fltCommSnmpUserSnmpUserNotDeployed
SNMP User [name] can’t be deployed. Error: [configStatusMessage]
This fault typically occurs because Cisco FPR Manager has detected an invalid SNMP user.
If you see this fault, take the following actions:
Step 1 Verify that the SNMP user name is not also used by a system user.
fltCommSvcEpCommSvcNotDeployed
Communication Service configuration can’t be deployed. Error: [configStatusMessage]
This fault typically occurs because Cisco FPR Manager has detected an invalid communication policy configuration.
If you see this fault, take the following actions:
Step 1 Verify that the ports configured across all communication services are unique.
fltSwVlanPortNsVLANCompNotSupport
VLAN Port Count Optimization is not supported
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltPolicyControlEpSuspendModeActive
FPRM is suspended from receiving updates from FPR Central.
This fault occurs when FPRM enters a suspended state and no longer receives updates from the FPR Central that it is registered with.
If you see this fault, take the following actions:
Step 1 Check whether FPR Central was restored to a previous version or a policy rollback has occurred. You may also have put FPRM into manual suspension mode by using the set suspendstate on command under the system-control-ep policy scope.
Step 2 Confirm the suspend state by using show control-ep policy detail under the system scope. If you still want to receive updates from FPR Central, either restore FPR Central to a version compatible with FPRM or set the suspend state to off by acknowledging it with set ackstate acked under the policy-control scope (see the example following these steps).
Step 3 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
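The following is a minimal sketch of the check and acknowledgement in Step 2. The FPR-A prompt is illustrative, and the scope path is an assumption based on the scope names mentioned in the steps above.
FPR-A# scope system
FPR-A /system # show control-ep policy detail
FPR-A /system # scope control-ep policy
FPR-A /system/control-ep # set ackstate acked
FPR-A /system/control-ep* # commit-buffer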
fltNetworkElementThermalThresholdCritical
Fabric Interconnect [id] temperature: [thermal]
This fault occurs when the temperature of a Fabric Interconnect exceeds a critical threshold value. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the Fabric Interconnect.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the Fabric Interconnects have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 6 Replace faulty Fabric Interconnects.
Step 7 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltFabricPinTargetDown
Pin target is a non-existent interface
This fault typically occurs when a PinGroup has an unresolvable target.
If you see this fault, take the following action:
Step 1 Check whether the PinGroup target is correctly provisioned.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltFabricEthLanEpOverlapping-vlan
On Fabric: [switchId], Port: [slotId]/[aggrPortId]/[portId] following overlapping VLANs detected: [overlappingVlans]
On Fabric: [switchId], Port: [slotId]/[portId] following overlapping VLANs detected: [overlappingVlans]
This fault occurs when overlapping VLANs exist due to misconfiguration.
Ports configured with VLANs belonging to one network group should not intersect with ports of a different network group whose VLANs overlap.
fltFabricEthLanPcOverlapping-vlan
Overlapping VLANs detected on Fabric: [switchId], Port: [portId] in configured VLANs: [overlappingVlans]
This fault occurs when overlapping VLANs exist due to misconfiguration.
Ports configured with VLANs belonging to one network group should not intersect with ports of a different network group whose VLANs overlap.
fltFabricVlanMisconfigured-mcast-policy
VLAN [name] multicast policy [mcastPolicyName] is non-default.
This fault is raised when a VLAN belonging to a Springfield fabric has a non-default multicast policy assigned to it.
If you see this fault, take the following action:
Step 1 Unassign the multicast policy for this VLAN, or change the multicast policy to the default.
Step 2 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMgmtConnectionDisabled
Management Connection [type] in server [id] is not operational
This fault occurs when multiple management connections are acknowledged.
If you see this fault, take the following actions:
Step 1 Disable the management connection which is unused.
Step 2 If a new management connection needs to be used, decommission and recommission the server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMgmtConnectionUnused
Management Connection [type] in server [id] is unused
This fault occurs when a management connection is not enabled.
If you see this fault, you can enable the connection if none of the management connections are enabled. Otherwise, this fault can be ignored.
fltMgmtConnectionUnsupportedConnectivity
Unsupported connectivity for management connection [type] in server [id]
This fault typically occurs because Cisco FPR Manager has detected that the physical connectivity of the management port of the server is unsupported.
If you see this fault, take the following actions:
Step 1 Connect the management port(s) of the rack-mount server to the fabric extender(s).
Step 2 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltCallhomeEpNoSnmpPolicyForCallhome
FPR Manager cannot apply Callhome policy if SNMP Policy is not configured or if SNMP Syscontact has an empty value. The Callhome policy from FPR Central has not been applied.
This fault typically occurs when FPR Manager receives an invalid configuration from FPR Central wherein Callhome is configured on FPR Central but there is no SNMP Syscontact defined locally.
If you see this fault, please ensure that the SNMP policy is configured on FPR Manager, either locally or via FPR Central.
fltCapabilityCatalogueLoadErrors
Load errors: File parse errors: [fileParseFailures], provider load failures: [providerLoadFailures], XML element load errors: [loadErrors].
The capability catalog failed to load fully. This may be caused by either a faulty FPRM image or a faulty catalog image.
If you see this fault, take the following actions:
Step 1 Check the version of the capability catalog.
Step 2 Contact Cisco TAC to see if there are known issues with the catalog and if there is a catalog image that will fix the known issues.
fltExtmgmtArpTargetsArpTargetsNotValid
Invalid ARP Targets configured for Management Interface Polling. Error: [configStatusMessage]
This fault typically occurs because Cisco FPR Manager has detected an invalid ArpTargets Configuration.
If you see this fault, take the following actions:
Step 1 Verify that the ARP target IP address and the external management IP address are in the same subnet.
Step 2 Verify that the ARP target IP address is not the same as the IP address of this system’s fabric interconnects.
Step 3 Verify that the ARP target IP address is not the same as the virtual IP address.
fltExtpolClientGracePeriodWarning
FPR domain [name] registered with FPR Central has entered into the grace period.
An FPR domain is registered with FPR Central without a license. This fault typically occurs if this FPR domain is registered with FPR Central after all default (and procured) licenses are assigned to other FPR domains.
If you see this fault, take the following actions:
Step 1 Check the number of licenses installed and consumed on FPR Central. In the Cisco FPR Central GUI, you can access the licensing information from the Operations Management tab for the FPR Central. In the Cisco FPR Central CLI, you can access the licensing information by entering the show usage detail command under the license scope from a service-reg session (see the example following these steps).
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
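The following is a minimal sketch of the license check in Step 1. The FPR-Central prompt is illustrative; the connect service-reg, scope license, and show usage detail commands follow the session and scope names mentioned in the step above.
FPR-Central# connect service-reg
FPR-Central(service-reg)# scope license
FPR-Central(service-reg) /license # show usage detail
The same check applies to the other grace-period faults that follow.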
fltExtpolClientGracePeriodWarning2
FPR Domain [name] registered with FPR Central is running in the grace period for more than 10 days
This FPR domain is registered with FPR Central without having a license. This fault typically occurs if this FPR domain is registered with FPR Central after all default (and procured) licenses are assigned to other FPR domains.
If you see this fault, take the following actions:
Step 1 Check the number of licenses installed and consumed on FPR Central. In the Cisco FPR Central GUI, you can access the licensing information from the Operations Management tab for the FPR Central. In the Cisco FPR Central CLI, you can access the licensing information by entering the show usage detail command under the license scope.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltExtpolClientGracePeriodWarning3
FPR Domain [name] registered with FPR Central is running in the grace period for more than 30 days
This FPR Domain registered with FPR Central has been running in the grace period for more than 30 days. This fault typically occurs if this FPR domain is registered with FPR Central after all default (and procured) licenses are assigned to other FPR domains and the unlicensed FPR Domains have been running for more than 30 days.
If you see this fault, take the following actions:
Step 1 Check the number of licenses installed and consumed on FPR Central. In the Cisco FPR Central GUI, you can access the licensing information from the Operations Management tab for the FPR Central. In the Cisco FPR Central CLI, you can access the licensing information by entering the show usage detail command under the license scope.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltExtpolClientGracePeriodWarning4
FPR Domain [name] registered with FPR Central is running in the grace period for more than 60 days
This FPR Domain registered with FPR Central has been running in the grace period for more than 60 days. This fault typically occurs if this FPR domain is registered with FPR Central after all default (and procured) licenses are assigned to other FPR domains and the unlicensed FPR Domains have been running for more than 60 days.
If you see this fault, take the following actions:
Step 1 Check the number of licenses installed and consumed on FPR Central. In the Cisco FPR Central GUI, you can access the licensing information from the Operations Management tab for the FPR Central. In the Cisco FPR Central CLI, you can access the licensing information by entering the show usage detail command under the license scope.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltExtpolClientGracePeriodWarning5
FPR Domain [name] registered with FPR Central is running in the grace period for more than 90 days
This FPR Domain registered with FPR Central has been running in the grace period for more than 90 days. This fault typically occurs if this FPR domain is registered with FPR Central after all default (and procured) licenses are assigned to other FPR domains and the unlicensed FPR Domains have been running for more than 90 days.
If you see this fault, take the following actions:
Step 1 Check the number of licenses installed and consumed by FPR Central. In the Cisco FPR Central GUI, you can access the licensing information from the Operations Management tab for the FPR Central. In the Cisco FPR Central CLI, you can access the licensing information by entering the show usage detail command under the license scope.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltExtpolClientGracePeriodWarning6
FPR Domain [name] registered with FPR Central is running in the grace period for more than 119 days
This FPR Domain registered with FPR Central has been running in the grace period for more than 119 days. This fault typically occurs if this FPR domain is registered with FPR Central after all default (and procured) licenses are assigned to other FPR domains and the unlicensed FPR Domains have been running for more than 119 days.
If you see this fault, take the following actions:
Step 1 Check the number of licenses installed and consumed on FPR Central. In the Cisco FPR Central GUI, you can access the licensing information from the Operations Management tab for FPR Central. In the Cisco FPR Central CLI, you can access the licensing information by entering the show usage detail command under the license scope.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltExtpolClientGracePeriodWarning7
Grace period for FPR Domain [name] registered with FPR Central has expired. Please acquire a license for the same.
This FPR Domain registered with FPR Central has been running in the grace period for more than 120 days. This fault typically occurs if this FPR domain is registered with FPR Central after all default (and procured) licenses are assigned to other FPR domains and the unlicensed FPR Domains have been running for more than 120 days. At this stage, the system licensing state is set to expired.
If you see this fault, take the following actions:
Step 1 Check the number of licenses installed and consumed on FPR Central. In the Cisco FPR Central GUI, you can access the licensing information from the Operations Management tab for FPR Central. In the Cisco FPR Central CLI, you can access the licensing information by entering the show usage detail command under the license scope.
Step 2 Disable the unlicensed FPR Domains to bring the number of enabled Domains down to the number of total licenses.
Step 3 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC immediately to procure more licenses.
fltExtpolClientGracePeriodWarning1
FPR Domain [name] is registered with FPR Central without a valid license.
This FPR domain is registered with FPR Central without having a license. This fault typically occurs if this FPR domain is registered with FPR Central without the initial activation license and after all default licenses are assigned to other FPR domains.
If you see this fault, take the following actions:
Step 1 Check if the initial activation license is installed on FPR Central. In the Cisco FPR Central GUI, you can access the licensing information from the Operations Management tab for FPR Central. In the Cisco FPR Central CLI, you can access the licensing information by entering the show usage detail command under the license scope.
Step 2 Disable the unlicensed FPR Domains to bring the number of enabled Domains down to the number of total licenses.
Step 3 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC immediately to procure more licenses.
fltStorageItemFilesystemIssues
Partition [name] on fabric interconnect [id] has file system errors
This fault occurs when the partition develops file system errors.
If you see this fault, take the following actions:
Step 1 Create a show tech-support file and contact Cisco TAC.
fltPkiKeyRingModulus
[name] Keyring’s RSA modulus is invalid.
This fault occurs when an RSA keyring is created without the modulus set.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltAaaOrgLocaleOrgNotPresent
Locale Org [name] can’t be deployed. Error: [configStatusMessage]
This fault typically occurs because Cisco FPR Manager has detected an unidentified org reference.
If you see this fault, take the following actions:
Step 1 Verify that the org DN referenced in this locale org exists; if it does not, create it.
fltNetworkOperLevelExtraprimaryvlans
Fabric Interconnect [id]: Number of primary vlans exceeds the max limit on the FI: Number of Primary Vlans: [primaryVlanCount] and Max primary vlans allowed: [maxPrimaryVlanCount]
This fault occurs when the fabric interconnect has more primary VLANs than the supported maximum.
If you see this fault, take the following actions:
Step 1 Delete the extra primary VLANs from the fabric interconnect. The system may appear to function normally even with these extra primary VLANs in place; however, performance issues may be observed because the system is operating above the recommended scale limits.
Step 2 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentHealthLedCriticalError
Health LED of server [chassisId]/[slotId] shows error. Reason: [healthLedStateQualifier]
Health LED of server [id] shows error. Reason: [healthLedStateQualifier]
This fault is raised when the blade health LED changes to blinking amber.
If you see this fault, take the following actions:
Step 1 Read the fault summary and determine the course of action.
fltEquipmentHealthLedMinorError
Health LED of server [chassisId]/[slotId] shows error. Reason: [healthLedStateQualifier]
Health LED of server [id] shows error. Reason: [healthLedStateQualifier]
This fault is raised when the blade health LED changes to amber.
If you see this fault, take the following actions:
Step 1 Read the fault summary and determine the course of action.
fltVnicEtherIfRemoteVlanUnresolvable
The named vlan [name] for vNIC [name] cannot be resolved remotely
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltVnicEtherVirtualization-conflict
Multiple connection policies cannot be assigned to the same Eth vNIC
This fault occurs when multiple connection policies are assigned to the same vNIC.
If you see this fault, take the following actions:
Step 1 Check on the vNIC if different types of connection policies (dynamic/VMQ) are assigned. Keep only one type.
Step 2 Check on the vNIC through CLI if more than one connection policy of the same type is assigned. Keep only one connection policy.
fltLsIssuesIscsi-config-failed
Service Profile [name] configuration failed due to iSCSI issue [iscsiConfigIssues]
This fault typically occurs when a Cisco FPR Manager service profile configuration fails due to iSCSI configuration issues.
If you see this fault, take the following actions:
Step 1 Correct the service profile iSCSI configuration as per the reported issue.
Step 2 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltStorageLocalDiskMissing
Local disk [id] missing on server [chassisId]/[slotId]
Local disk [id] missing on server [id]
This fault occurs when a disk is missing.
If you see this fault, take the following action:
Step 1 Insert or reseat the disk in the reported slot.
fltStorageFlexFlashControllerInoperable
FlexFlash Controller [id] on server [chassisId]/[slotId] is inoperable. Reason: [operQualifierReason] Status: [controllerHealth]
FlexFlash Controller [id] on server [id] is inoperable. Reason: [operQualifierReason] Status: [controllerHealth]
This fault occurs when the flexflash controller is inaccessible.
If you see this fault, take the following action:
Step 1 If reported as Firmware Mismatch, update the CIMC and Board Controller firmware.
Step 2 If reported as Fatal Error, reset the CIMC and update the Board Controller firmware.
Step 3 For PCI and mezz-based controllers, check the seating of the storage controller. If the problem persists, replace the controller.
fltStorageFlexFlashCardInoperable
FlexFlash Card [slotNumber] on server [chassisId]/[slotId] is inoperable. Reason: [operQualifierReason]
FlexFlash Card [slotNumber] on server [id] is inoperable. Reason: [operQualifierReason]
This fault occurs when the flexflash card is inaccessible.
If you see this fault, take the following action:
Step 1 If reported as Write Protected, remove write protection from the card.
Step 2 If reported as Invalid Capacity, use an OS disk utility to delete and recreate the partitions.
Step 3 If the above actions did not resolve the issue, replace the card.
fltStorageFlexFlashCardMissing
FlexFlash Card [slotNumber] missing on server [chassisId]/[slotId]
FlexFlash Card [slotNumber] missing on server [id]
This fault occurs when a FlexFlash Card is missing.
If you see this fault, take the following action:
Step 1 Insert or reseat the FlexFlash card in the reported slot.
fltStorageFlexFlashVirtualDriveDegraded
FlexFlash Virtual Drive RAID degraded on server [chassisId]/[slotId]. Reason: [raidState]
FlexFlash Virtual Drive RAID degraded on server [id]. Reason: [raidState]
This fault occurs when the flexflash raid is degraded.
If you see this fault, take the following action:
Step 1 Re-acknowledge the server after setting the FlexFlash scrub policy to yes. Note that this action will erase all data on the card(s).
Step 2 Verify the health of the controller and card(s). If the above action did not resolve the issue, replace the card(s).
fltStorageFlexFlashVirtualDriveInoperable
FlexFlash Virtual Drive on server [chassisId]/[slotId] is inoperable. Reason: [raidState]
FlexFlash Virtual Drive on server [id] is inoperable. Reason: [raidState]
This fault occurs when the flexflash virtual drive is inoperable.
If you see this fault, take the following action:
Step 1 Re-acknowledge the server after setting the FlexFlash scrub policy to yes. Note that this action will erase all data on the card(s).
Step 2 Verify the health of the controller and card(s). If the above action did not resolve the issue, replace the card(s).
fltStorageFlexFlashControllerUnhealthy
FlexFlash Controller [id] on server [chassisId]/[slotId] is unhealthy. Reason: [operQualifierReason] Status: [controllerHealth]
FlexFlash Controller [id] on server [id] is unhealthy. Reason: [operQualifierReason] Status: [controllerHealth]
This fault occurs when the flexflash controller is unhealthy.
If you see this fault, take the following action:
Step 1 If reported as Old Firmware or Firmware Mismatch, update the CIMC and Board Controller firmware, then reboot the server.
Step 2 Re-acknowledge the server after setting the FlexFlash scrub policy to yes. Note that this action will erase all data on the card(s).
Step 3 Verify the health of the controller. If the above actions did not resolve the issue, replace the card(s).
fltAaaProviderGroupProvidergroupsize
For [dn]: Server Group [name] has [size] provider references. Authentication might fail if this provider group is used with an auth-domain.
This fault typically occurs because Cisco FPR Manager has detected a provider group with 0 provider references.
If you see this fault, take the following actions:
Step 1 Delete the provider group that does not have any provider references.
Step 2 Alternatively, add provider references to the provider group.
fltFirmwareAutoSyncPolicyDefaultHostPackageMissing
Default host firmware package is missing or deleted.
This fault typically occurs when the Auto Firmware Sync Policy is set to Auto-acknowledge or User-acknowledge and the default host firmware pack is not available.
If you see this fault, take the following actions:
Step 1 Go to Servers tab and expand policies node. Select Host Firmware Packages under policies node.
Step 2 If the FSM failed, review the error message in the FSM.
Step 3 Create a host firmware package with the name ’default’. If the problem persists, create a show tech-support file and contact Cisco TAC.
fltFabricNetflowMonSessionFlowMonConfigFail
Configuration for traffic flow monitor [name] failed, reason: [configFailReason]
This fault typically occurs when the configuration of a traffic flow monitoring session is incorrect.
If you see this fault, correct the configuration problem provided in the fault description.
fltFabricNetflowMonSessionNetflowSessionConfigFail
Netflow session configuration failed because [configQualifier]
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltFabricPooledVlanNamedVlanUnresolved
VLAN [name] for VLAN group [name] cannot be resolved to any existing vlans.
This fault typically occurs when a named VLAN in VLAN group cannot be resolved to any existing vlans.
If you see this fault, take the following actions:
Step 1 Create the named VLAN, or remove the unresolved VLAN from the VLAN group.
Step 2 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltExtvmmVMNDRefVmNetworkReferenceIncorrect
VM Network [name] references [vmNetworkDefName] that is already being referenced by another VM Network
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltExtmgmtNdiscTargetsNdiscTargetsNotValid
Invalid NDISC Targets configured for Management Interface Polling. Error: [configStatusMessage]
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltFirmwareBootUnitPowerCycleRequired
Board controller upgraded, manual a/c power cycle required on server [serverId]
If you see this fault, take the following actions:
Step 1 Power cycle the board controller.
fltMgmtControllerUnsupportedDimmBlacklisting
Dimm blacklisting is not supported on server [chassisId]/[slotId]
Dimm blacklisting is not supported on server [id]
This fault typically occurs when the CIMC firmware on a server is an earlier release than Cisco FPR, Release 2.2.
If you see this fault, consider upgrading the CIMC firmware, and the entire Cisco FPR instance if necessary, to Cisco FPR, Release 2.2 or later.
fltFabricEthLanEpUdldLinkDown
UDLD state for ether port [slotId]/[aggrPortId]/[portId] on fabric interconnect [switchId] is: [udldOperState].
UDLD state for ether port [slotId]/[portId] on fabric interconnect [switchId] is: [udldOperState].
This fault occurs when an Ethernet uplink port is connected unidirectionally.
If you see this fault, take the following action:
Step 1 Check the tx and rx connection of the uplink port.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltFabricEthLanPcEpUdldLinkDown
UDLD state for ether port [slotId]/[aggrPortId]/[portId] on fabric interconnect [switchId] is: [udldOperState].
UDLD state for ether port [slotId]/[portId] on fabric interconnect [switchId] is: [udldOperState].
This fault occurs when an Ethernet uplink port-channel member is connected unidirectionally.
If you see this fault, take the following action:
Step 1 Check the tx and rx connection of the uplink port.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentChassisInvalid-fru
Chassis [id] has an empty value for FRU identity reported by CMC.
This fault typically occurs when the FRU information for a chassis has an empty value.
If you see this fault, take the following actions:
Step 1 Verify that the capability catalog in Cisco FPR Manager is up to date. If necessary, update the catalog.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentSwitchIOCardRemoved
[side] FI IOM [chassisId]/[id] ([switchId]) is removed
This fault typically occurs because an FI I/O module is removed from the chassis. In a cluster configuration, the chassis fails over to the other FI I/O module. For a standalone configuration, the chassis associated with the FI I/O module loses network connectivity. This is a critical fault because it can result in the loss of network connectivity and disrupt data traffic through the FI I/O module.
If you see this fault, take the following actions:
Step 1 Reinsert the FI I/O module, configure the fabric interconnect ports connected to it as server ports, and wait a few minutes to see if the fault clears.
Step 2 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentSwitchIOCardThermalProblem
[side] FI IOM [chassisId]/[id] ([switchId]) operState: [operState]
This fault occurs when there is a thermal problem on an FI I/O module. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the FI I/O module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the FI I/O modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace faulty FI I/O modules.
Step 8 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power-capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 9 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentSwitchIOCardThermalThresholdNonCritical
[side] FI IOM [chassisId]/[id] ([switchId]) temperature: [thermal]
This fault occurs when the temperature of an FI I/O module has exceeded a non-critical threshold value, but is still below the critical threshold. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
- If sensors on a CPU reach 179.6F (82C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the FI I/O module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the chassis and FI I/O modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis and FI I/O module are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power-capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 8 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentSwitchIOCardThermalThresholdCritical
[side] FI IOM [chassisId]/[id] ([switchId]) temperature: [thermal]
This fault occurs when the temperature of an FI I/O module has exceeded a critical threshold value. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
- If sensors on a CPU reach 179.6F (82C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the FI I/O module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the chassis and FI I/O modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis and FI I/O module are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace the faulty FI I/O modules.
Step 8 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power-capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 9 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentSwitchIOCardThermalThresholdNonRecoverable
[side] FI IOM [chassisId]/[id] ([switchId]) temperature: [thermal]
This fault occurs when the temperature of an FI I/O module has been out of the operating range, and the issue is not recoverable. Be aware of the following possible contributing factors:
- Temperature extremes can cause Cisco FPR equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
- Cisco FPR equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
- If sensors on a CPU reach 179.6F (82C), the system will take that CPU offline.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the FI I/O module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the chassis and FI I/O modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis and FI I/O module are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace the faulty FI I/O modules.
Step 8 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 9 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentSwitchIOCardIdentity
[side] FI IOM [chassisId]/[id] ([switchId]) has a malformed FRU
This fault typically occurs when the FRU information for an FI I/O module is corrupted or malformed.
If you see this fault, take the following actions:
Step 1 Verify that the capability catalog in Cisco FPR Manager is up to date. If necessary, update the catalog.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentSwitchIOCardCpuThermalThresholdCritical
[side] FI IOM [chassisId]/[id] ([switchId]) processor temperature exceeded the limit
This fault typically occurs when the processor temperature in the FI I/O module exceeds the limit.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the FI I/O module.
Step 2 Review the Cisco FPR Site Preparation Guide to ensure the chassis and FI I/O modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows on the Cisco FPR chassis and FI I/O module are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused blade servers and rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace the faulty FI I/O modules.
Step 8 Use the Cisco FPR power capping capability to limit power usage. Power capping can limit the power consumption of the system, including blade and rack servers, to a threshold that is less than or equal to the system’s maximum rated power. Power capping can have an impact on heat dissipation and help to lower the installation site temperature.
Step 9 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltPowerBudgetChassisPsuMixedMode
Chassis [id] has a mix of high-line and low-line PSU input power sources.
This fault occurs when there is a mix of high-line and low-line PSU input power sources.
If you see this fault, change all PSU input power sources to the same mode.
fltNetworkElementRemoved
Fabric Interconnect [id] operability: [operability]
This fault occurs when the fabric interconnect is removed in a clustering setup.
If you see this fault, take the following actions:
Step 1 Reinsert the removed fabric interconnect into the chassis (applicable to FPR-Mini only).
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltNetworkOperLevelExtrasecondaryvlans
Fabric Interconnect [id]: Number of secondary vlans exceeds the max limit on the FI: Number of secondary vlans: [secondaryVlanCount] and Max secondary vlans allowed: [maxSecondaryVlanCount]
This fault occurs when the fabric interconnect has more secondary VLANs than are supported.
If you see this fault, take the following actions:
Step 1 Delete the extra secondary VLANs from the fabric interconnect (see the CLI sketch below). The system may appear to function normally even with the extra secondary VLANs in place, but performance issues can occur because the system is operating above the recommended scale limits.
Step 2 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
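For example, a minimal sketch of deleting a surplus secondary VLAN from the FXOS CLI. The VLAN name vlan200 is a hypothetical placeholder, prompts are illustrative, and exact scopes can vary by release; fabric-specific VLANs may instead live under scope fabric a or scope fabric b:
  FPR# scope eth-uplink
  FPR /eth-uplink # delete vlan vlan200
  FPR /eth-uplink* # commit-buffer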
fltSwVlanExtrasecondaryvlansperprimary
Number of secondary vlans associated with the primary vlan [id] in Fabric Interconnect [switchId] exceeds the max limit: Number of secondary vlans: [secVlanPerPrimaryVlanCount] and Max secondary vlans allowed in a primary vlan: 30
This fault occurs when the fabric interconnect has more secondary VLANs per primary VLAN than are supported.
If you see this fault, take the following actions:
Step 1 Delete the extra secondary VLANs associated with this primary VLAN in the fabric interconnect. The system may appear to function normally even with the extra secondary VLANs in place, but performance issues can occur because the system is operating above the recommended scale limits.
Step 2 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMgmtBackupPolicyConfigConfiguration backup outdated
This fault occurs when the last configuration backup was taken too long ago.
If you see this fault, take the following actions:
Step 1 Take a configuration backup.
fltFirmwareStatusCimcFirmwareMismatch
Aggregate blade CIMC firmware mismatch. Firmware: [cimcVersion]
This fault typically occurs when the CIMC firmware image on the master and slave nodes in an aggregate blade does not match.
Update and activate the master and slave CIMC to the same firmware version.
fltFirmwareStatusPldFirmwareMismatch
Aggregate blade board controller firmware mismatch. Firmware: [pldVersion]
This fault typically occurs when the board controller firmware image on the master and slave nodes in an aggregate blade does not match.
Update the master and slave board controllers to the same firmware version.
fltVnicEtherVirtualization-netflow-conflict
Netflow and VMQ/SRIOV-USNIC policies cannot be assigned to the same Eth vNIC
This fault typically occurs when a Netflow source vNIC is made a USNIC or VMQ vNIC.
If you see this fault, take the following actions:
Step 1 Remove the vNIC from the Netflow session, or remove the USNIC/VMQ policy.
Step 2 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltSysdebugLogExportStatusLogExportFailure
Log export to remote server failed from [switchId]:[exportFailureReason]
This fault occurs when Cisco Firepower Manager cannot transfer a log file to a remote server. This is typically the result of one of the following issues:
- The remote server is not accessible.
- One or more of the parameters for the remote server that are specified for the log export target, such as path, username, password, ssh-key and server name, are incorrect.
If you see this fault, take the following actions:
Step 1 Verify the connectivity to the remote server.
Step 2 Verify the path information of the remote server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltLsServerSvnicNotPresent
Service profile [name] does not contain service vnics for netflow.
The service profile does not have service vNICs, so Netflow will not function on this server. This fault typically occurs as a result of the following issue:
- The service profile already has the maximum number of vNICs, so it cannot accommodate the service vNICs required for Netflow.
If you see this fault, take the following actions:
Step 1 If you have already enabled Netflow, reduce the number of vNICs on the service profile to accommodate the service vNICs.
Step 2 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltLsIssuesKvmPolicyUnsupported
Kvm mgmt policy not supported by current CIMC version
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltComputeABoardThermalProblem
Motherboard [faultQualifier] of server [chassisId]/[slotId] (service profile: [assignedToDn]) thermal: [thermal]
Motherboard of server [id] (service profile: [assignedToDn]) thermal: [thermal]
This fault typically occurs when the motherboard thermal sensors on a server detect a problem.
If you see this fault, take the following actions:
Step 1 Verify that the server fans are working properly.
Step 2 Wait for 24 hours to see if the problem resolves itself.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltComputeABoardPowerUsageProblem
Motherboard [faultQualifier] of server [chassisId]/[slotId] (service profile: [assignedToDn]) powerUsage: [powerUsage]
Motherboard of server [id] (service profile: [assignedToDn]) powerUsage: [powerUsage]
This fault typically occurs when the motherboard power consumption exceeds the threshold limits and the power usage sensors on the server detect a problem.
If you see this fault, take the following actions:
Step 1 Create a show tech-support file and contact Cisco TAC.
fltComputeABoardMotherBoardVoltageThresholdUpperNonRecoverable
Motherboard input voltage (12V/5V/3V) in server [id] is [voltage]
Motherboard [faultQualifier] input voltage (12V/5V/3V) in server [chassisId]/[slotId] is [voltage]
This fault is raised when one or more motherboard input voltages has become too high and is unlikely to recover.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltComputeABoardMotherBoardVoltageThresholdLowerNonRecoverable
Motherboard input voltage (12V/5V/3V) in server [id] is [voltage]
Motherboard [faultQualifier] input voltage (12V/5V/3V) in server [chassisId]/[slotId] is [voltage]
This fault is raised when one or more motherboard input voltages has dropped too low and is unlikely to recover.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltComputeABoardMotherBoardVoltageUpperThresholdCritical
Motherboard input voltage (12V/5V/3V) in server [id] is [voltage]
Motherboard [faultQualifier] input voltage (12V/5V/3V) in server [chassisId]/[slotId] is [voltage]
This fault is raised when one or more motherboard input voltages has crossed upper critical thresholds.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltComputeABoardMotherBoardVoltageLowerThresholdCritical
Motherboard input voltage (12V/5V/3V) in server [id] is [voltage]
Motherboard [faultQualifier] input voltage (12V/5V/3V) in server [chassisId]/[slotId] is [voltage]
This fault is raised when one or more motherboard input voltages has crossed lower critical thresholds.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltCimcvmediaActualMountEntryVmediaMountFailed
Server [chassisId]/[slotId] (service profile: [assignedToDn]) vmedia mapping [mappingName] has failed.
Server [id] (service profile: [assignedToDn]) vmedia mapping [mappingName] has failed.
If you see this fault, take the following actions:
Step 1 Check the mount-related details (remote server IP, port, path, and file reachability) and re-acknowledge the server.
fltFabricVlanPrimaryVlanMissingForIsolated
Primary Vlan can not be resolved for isolated vlan [name]
This fault typically occurs when Cisco FPR Manager encounters a problem resolving the primary VLAN ID corresponding to a particular isolated VLAN.
If you see this fault, take the following actions:
Step 1 Associate the isolated VLAN with a valid primary VLAN.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltFabricVlanPrimaryVlanMissingForCommunity
Primary Vlan can not be resolved for community vlan [name]
This fault typically occurs when Cisco FPR Manager encounters a problem resolving the primary VLAN ID corresponding to a particular community VLAN.
If you see this fault, take the following actions:
Step 1 Associate the community VLAN with a valid primary VLAN.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltFabricVlanMismatch-a
VLAN [name] has [overlapStateForA] with another vlan under lan-cloud/appliance-cloud for the fabric interconnect A
This fault typically occurs when the private VLAN properties of a VLAN under one cloud conflict with the private VLAN properties of the same VLAN under another cloud for fabric interconnect A. A cloud here means either a LAN cloud or an appliance cloud. This issue can prevent the use of this VLAN.
If you see this fault, take the following actions:
Step 1 Check the sharing property of the VLAN, referenced by its VLAN ID, under both clouds on fabric A.
Step 2 If the sharing property of the VLAN does not match that of the VLAN on the other cloud, change the sharing property of either VLAN so that the two match.
Step 3 If the VLAN is an isolated/community VLAN, check the pubnwname property of the VLAN, referenced by its VLAN ID, under both clouds.
Step 4 If the pubnwname property of the isolated/community VLAN does not match that of the isolated/community VLAN on the other cloud, change the pubnwname property of either VLAN so that the two match.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltFabricVlanMismatch-b
VLAN [name] has [overlapStateForB] with another vlan under lan-cloud/appliance-cloud for the fabric interconnect B
This fault typically occurs when the private VLAN properties of a VLAN under one cloud conflict with the private VLAN properties of the same VLAN under another cloud for fabric interconnect B. A cloud here means either a LAN cloud or an appliance cloud. This issue can prevent the use of this VLAN.
If you see this fault, take the following actions:
Step 1 Check the sharing property of the VLAN, referenced by its VLAN ID, under both clouds on fabric B.
Step 2 If the sharing property of the VLAN does not match that of the VLAN on the other cloud, change the sharing property of either VLAN so that the two match.
Step 3 If the VLAN is an isolated/community VLAN, check the pubnwname property of the VLAN, referenced by its VLAN ID, under both clouds.
Step 4 If the pubnwname property of the isolated/community VLAN does not match that of the isolated/community VLAN on the other cloud, change the pubnwname property of either VLAN so that the two match.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltFabricVlanErrorAssocPrimary
VLAN [name] is in error state because the associated primary vlan [assocPrimaryVlanState]
This fault typically occurs when there is an error in the primary VLAN associated with a secondary VLAN. This issue can prevent the use of this VLAN.
If you see this fault, take the following actions:
Step 1 Check the pubnwname property of the VLAN.
Step 2 If pubnwname is not set or refers to a nonexistent primary VLAN, set it to the name of a primary VLAN that is in a good state (see the CLI sketch below).
Step 3 If pubnwname refers to a VLAN that is not a primary VLAN, either change the referenced VLAN to be a primary VLAN or specify a different primary VLAN.
Step 4 If pubnwname refers to a valid primary VLAN, check the state of that primary VLAN.
Step 5 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
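As a hedged illustration, checking and correcting the pubnwname property from the FXOS CLI might look like the following. The VLAN names vlan300 and primary-vlan-100 are hypothetical placeholders, prompts are illustrative, and command availability can vary by release:
  FPR# scope eth-uplink
  FPR /eth-uplink # scope vlan vlan300
  FPR /eth-uplink/vlan # show detail
  FPR /eth-uplink/vlan # set pubnwname primary-vlan-100
  FPR /eth-uplink/vlan* # commit-buffer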
fltStorageMezzFlashLifeConfiguration-error
Flash Life on server [chassisId]/[slotId] flashStatus: [flashStatus]
This fault occurs when FPRM cannot retrieve the remaining Fusion-io life due to an error.
If you see this fault, take the following actions:
Step 1 Upgrade Fusion-io Firmware.
Step 2 If the above action did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltStorageMezzFlashLifeDegraded
Flash Life on server [chassisId]/[slotId] flashStatus: [flashStatus]
This fault occurs when the Fusion-io life left is 10 percent or less.
If you see this fault, take the following actions:
Step 1 Continue to monitor the remaining Fusion-io life; if it reaches 0 percent, the adapter might revert to read-only mode.
fltStorageFlexFlashControllerMismatch
FlexFlash Controller [id] on server [chassisId]/[slotId] has SD cards with different sizes.
FlexFlash Controller [id] on server [id] has SD cards with different sizes.
This fault occurs when the FlexFlash SD cards do not match in size.
If you see this fault, take the following action:
Step 1 Remove one of the cards and replace it with a card of the same size as the remaining one.
fltStorageFlexFlashDriveUnhealthy
FlexFlash Drive [id] on server [chassisId]/[slotId] is unhealthy. Reason: [operQualifierReason] Status: [operationState]
FlexFlash Drive [id] on server [id] is unhealthy. Reason: [operQualifierReason] Status: [operationState]
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltStorageFlexFlashCardUnhealthy
FlexFlash Card [slotNumber] on server [chassisId]/[slotId] is unhealthy. Reason: [cardHealth]
FlexFlash Card [slotNumber] on server [id] is unhealthy. Reason: [cardHealth]
This fault occurs when the FlexFlash card is unhealthy.
If you see this fault, take the following action:
Step 1 Re-acknowledge the server after setting the FlexFlash scrub policy to yes. Note that this action erases all data on the card(s).
Step 2 Verify the health of the card. If the above action did not resolve the issue, replace the card.
fltMgmtInterfaceNamedInbandVlanUnresolved
This fault occurs if there is an issue in Inband interface configuration.
If you see this fault, verify that the VLAN configured for the inband IP has been created, that the VLAN is present in the inband profile, and that the IP address is configured.
fltMgmtInterfaceInbandUnsupportedServer
This fault occurs if there is an issue in Inband interface configuration.
If you see this fault, verify that the VLAN configured for the inband IP has been created, that the VLAN is present in the inband profile, and that the IP address is configured.
fltMgmtInterfaceInbandUnsupportedFirmware
This fault occurs if there is an issue in Inband interface configuration.
If you see this fault, verify that the VLAN configured for the inband IP has been created, that the VLAN is present in the inband profile, and that the IP address is configured.
fltComputePhysicalAdapterMismatch
Server [id] (service profile: [assignedToDn]) has invalid adapter combination
Server [chassisId]/[slotId] (service profile: [assignedToDn]) has invalid adapter combination
This fault typically occurs because Cisco FPR Manager has detected that the server has an invalid combination of Cisco VICs.
If you see this fault, take the following actions:
Step 1 Verify that a valid adapter combination is installed.
Step 2 Reacknowledge the server.
Step 3 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltEquipmentSwitchCardAct2LiteFail
Failed Identification Test in slot - [id] ([descr]). The module in this slot may not be a genuine Cisco product. Cisco warranties and support programs only apply to genuine Cisco products. If Cisco determines that your insertion of non-Cisco modules into a Cisco product is the cause of a support issue, Cisco may deny support under your warranty or under a Cisco support program such as SmartNet.
This fault occurs when the ACT2 chip fails.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltEquipmentTpmSlaveTpm
Server [chassisId]/[slotId], has a Tpm present on the Slave Board.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltPoolElementDuplicatedAssigned
ID is duplicated assigned for multiple servers(Check FPRC for details)
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSwVlanPortNsResourceStatusWarning
Total Available Vlan-Port Count on switch [switchId] is below 10%
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltNetworkElementMemoryerror
Fabric Interconnect [id] memory less than expected! Total Memory: [totalMemory] and Expected Memory: [expectedMemory]
This fault occurs when the total memory on FI is less than expected.
If you see this fault, take the following actions:
Step 1 Manually inspect the DIMMs on the fabric interconnect. Try removing and reinserting the DIMMs, and verify the total memory. If this does not resolve the issue, one of the DIMMs has gone bad and must be replaced.
Step 2 If the above actions did not resolve the issue, create a show tech-support file and contact Cisco TAC.
fltMgmtPmonEntryFPRM process failure
FPRM process [name] failed on FI [switchId]
This fault occurs in an unlikely event of a Cisco FPR Manager process crash. Typically, the failed process restarts and recovers from the problem. Any pending operations are restarted after the process successfully restarts.
If you see this fault and the process does not restart successfully, create a show tech-support file and contact Cisco TAC.
fltSmSlotSmaHeartbeat
Security module [slotId] - network adapter 1 is not responding
This fault occurs when a slot is not operationally up.
If you see this fault, take the following actions:
Step 1 Reboot the blade associated with the slot.
fltSmSlotBladeNotWorking
Security Module [slotId] is in failed state. Error: [errorMsg]
This fault occurs when blade discovery fails or service profile association fails.
If you see this fault, take the following actions:
Step 1 Reboot the blade associated with the slot.
fltSmSlotDiskFormatFailed
Disk format is failed on slot [slotId]
This fault occurs when blade disk formatting fails.
If you see this fault, take the following actions:
Step 1 Reformat the disk or replace it.
fltSmSlotBladeSwap
Blade swap detected on slot [slotId]
This fault occurs during a blade swap.
If you see this fault, take the following action:
Step 1 Insert the correct blade.
fltOsControllerFailedBladeBootup
Slot [slotId], boot up failed - recovery in progress
This fault occurs when the blade fails to boot up.
If you see this fault, no action is usually required because the blade will try to recover. If recovery does not succeed:
Step 1 Reboot the blade associated with the slot.
fltOsControllerFailedBootupRecovery
Slot [slotId], boot up failed - exceeded max number of retries
This fault occurs when the blade fails to boot up and has exceeded the maximum number of recovery retries.
If you see this fault, do the following:
Step 1 Reboot the blade associated with the slot.
fltFirmwarePlatformPackBundleVersionMissing
Platform version is empty in platform firmware package
This fault typically occurs when the platform version is not set.
If you see this fault, take the following actions:
Step 1 In the CLI, under scope org/fw-platform-pack, set the platform-bundle-vers to a desired or expected running platform version.
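A minimal sketch of this CLI sequence, assuming the default platform firmware package; the version string 2.6(1.131) is a hypothetical placeholder, and prompts are illustrative:
  FPR# scope org
  FPR /org # scope fw-platform-pack default
  FPR /org/fw-platform-pack # set platform-bundle-vers 2.6(1.131)
  FPR /org/fw-platform-pack* # commit-buffer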
fltSmSecSvcSwitchConfigFail
Switch configuration failed for Logical Device. Error: [switchErrorMsg]
This fault occurs when switch configuration fails for a LogicalDevice.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmLogicalDeviceIncompleteConfig
Logical Device [name] is not configured correctly. [errorMsg]
This fault occurs when a logical device is not configured correctly.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmLogicalDeviceLogicalDeviceError
Error in Logical Device [name]. [errorMsg]
This fault occurs when a logical device is in a non-terminal error state.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltEtherFtwPortPairBypass
Port-pair [portName]-[peerPortName] in switch-bypass mode
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltCommDateTimeCommNtpConfigurationFailed
Ntp Configuration failed, please check the error message in Ntp host
This fault typically occurs because all NTP configuration failed and the system is out of sync.
If you see this fault, take the following actions:
Step 1 Verify whether at least one NTP configuration succeeded.
fltSmConfigIssueLogicalDeviceConfigError
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmAppAppImageCorrupted
The application image [appId] is corrupted
This fault occurs when an application's metadata cannot be reloaded.
If you see this fault, take the following actions:
Step 1 Re-download the application from a trusted source
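For example, a hedged sketch of re-downloading an application image over SCP from the FXOS CLI; the server, path, and image filename are hypothetical placeholders, and prompts are illustrative:
  FPR# scope ssa
  FPR /ssa # scope app-software
  FPR /ssa/app-software # download image scp://user@remote-host/images/cisco-ftd.6.4.0.csp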
fltEquipmentXcvrNonSupportedXcvr
The transceiver inserted in port Ethernet [slotId]/[aggrPortId]/[portId] is not a Cisco product. Cisco warranties and support programs only apply to genuine Cisco products. If Cisco determines that your insertion of non-Cisco modules into a Cisco product is the cause of a support issue, Cisco TAC reserves the right to deny support
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltFabricSspEthMonDelAllSessEnabled
Packet Capture Session [name] was still enabled when delete-all-sessions was issued
This fault occurs when a user issues the delete-all-sessions command while one of the packet capture sessions is still enabled.
If you see this fault, take the following actions:
Step 1 Disable the enabled session
Step 2 Retry the delete-all-sessions command
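A minimal sketch of these two steps in the FXOS CLI; the session name session1 is a hypothetical placeholder, prompts are illustrative, and whether a commit-buffer is required can vary by release:
  FPR# scope packet-capture
  FPR /packet-capture # scope session session1
  FPR /packet-capture/session # disable
  FPR /packet-capture/session # up
  FPR /packet-capture # delete-all-sessions
  FPR /packet-capture # commit-buffer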
fltIpsecConnectionIpsecConnInvalidKey
Invalid keyring [keyring] for IPSec connection [name]
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltIpsecConnectionIpsecConnInvalidCert
Invalid Cert of keyring [keyring] for IPSec connection [name]
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltIpsecAuthorityIpsecAuthorInvalidTp
Invalid trustpoint [tpName] for IPSec
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmHotfixHotfixInstallFailed
Failed to install Hotfix [version] on [appName]-[identifier] in slot [slotId]. Error: [errorMsg]
This fault occurs when hotfix installation fails.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmHotfixHotfixError
Error in Hotfix [version] on appInstance [appName]-[identifier] in slot [slotId]. Error: [errorMsg]
This fault occurs when hotfix is in a non-terminal error state.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmErrorError
[operStr] failed on slot [slotId]:[errorMsg]
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmCloudConnectorCloudRegistrationFailed
Failed to register the device with the cloud. Error: [errorMessage]
This fault occurs when registration of the device with the cloud fails.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmCloudConnectorCloudUnregistrationFailed
Failed to unregister the device with the cloud. Error: [errorMessage]
This fault occurs when unregistration of the device with the cloud fails.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmUnsignedCspLicenseUnsignedCSPLicenseInstalled
Unsigned CSP License Installed [licenseFileName]
This fault occurs when an unsigned CSP license is installed on the system.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSdLinkVnicConfigFail
Failed to set the oper state for vnic(s) [failedCfgVnics].
This fault occurs when the vNIC for this link could not be configured.
If you see this fault, take the following actions:
Step 1 Note the vNIC(s) listed in the fault message; if the condition persists, create a show tech-support file and contact Cisco TAC.
fltNwctrlCardConfigOffline
Network Module [slotId] taken offline by user. Please check audit-logs for user activity.
This fault occurs when the switch card is powered down.
If you see this fault, create a show tech-support file and contact Cisco TAC.
fltNwctrlCardConfigFailed
Network Module [slotId] is in failed state. If new hardware is inserted, please ensure proper firmware is installed. Otherwise, please collect the detailed FPRM techsupport from the local-mgmt shell and contact Cisco.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltNwctrlCardConfigError
Network Module [slotId] is in error state. If new hardware is inserted, please ensure proper firmware is installed. Otherwise, please collect the detailed FPRM techsupport from the local-mgmt shell and contact Cisco.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltNwctrlCardConfigOirFailed
Network Module [slotId] is in failed state. Hot swap with a different type of module is not supported. Please reboot system.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltNwctrlCardConfigOirInvalid
Network Module [slotId] is in failed state. Hot swap of this type of module is not supported. Please reboot system.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltNwctrlCardConfigRemoval
Network Module [slotId] removed. Please re-insert module or use acknowledge command to confirm module removal.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltNwctrlCardConfigMismatch
Network Module [slotId] is of different type than previously inserted module in this slot. Please use acknowledge command to confirm module replacement.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltNwctrlCardConfigSupriseRemoval
Network Module [slotId] removed when in online state. It is recommended to set module offline before removal. Please re-insert module or use acknowledge command to confirm module removal.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltFirmwareRunnableAdapterUpgradeRequired
Adapter [id] on Security Module [slotId] requires a critical firmware upgrade. Please see Adapter Bootloader Upgrade instructions in the FXOS Release Notes posted with this release.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmClusterBootstrapCclSubnetNotSupported
Customization Cluster Control Link Subnet is not supported by the application
This fault occurs when the CCL subnet is not set to its default value but subnet customization is not supported by the application.
If you see this fault, take the following actions:
Step 1 Upgrade the application, or set the CCL network to 0.0.0.0 (see the CLI sketch below).
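A hedged sketch of resetting the CCL network from the FXOS CLI, assuming the cluster-bootstrap scope exposes a cluster-control-link network property (the property name can vary by FXOS version); the logical device name FTD-cluster is a hypothetical placeholder:
  FPR# scope ssa
  FPR /ssa # scope logical-device FTD-cluster
  FPR /ssa/logical-device # scope cluster-bootstrap
  FPR /ssa/logical-device/cluster-bootstrap # set cluster-control-link network 0.0.0.0
  FPR /ssa/logical-device/cluster-bootstrap* # commit-buffer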
fltSmAppInstanceFailedConversion
Unrecoverable error during conversion of App Instance [appName]-[startupVersion] on slot [slotId] during FXOS upgrade
This fault occurs if the system could not automatically convert smAppInstance to smAppInstance2 during the upgrade to the Fairlop release.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmAppInstance2AppNotResponding
App Instance [appName]-[identifier] with version [runningVersion] on slot [slotId] is not responding
This fault occurs when an app instance is not responding.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmAppInstance2AppInstallFailed
Failed to install App Instance [appName]-[identifier] with version [startupVersion] on slot [slotId]. Error: [errorMsg]
This fault occurs when an app instance installation fails.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmAppInstance2AppStartFailed
Failed to start App Instance [appName]-[identifier] with version [runningVersion] on slot [slotId]. Error: [errorMsg]
This fault occurs when an app instance start fails.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmAppInstance2AppUpdateFailed
Failed to update App Instance [appName]-[identifier] with version [startupVersion] on slot [slotId]. Error: [errorMsg]
This fault occurs when an app instance update fails.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmAppInstance2AppStopFailed
Failed to stop App Instance [appName]-[identifier] with version [runningVersion] on slot [slotId]. Error: [errorMsg]
This fault occurs when an app instance stop fails.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmAppInstance2AppNotInstalled
App Instance [appName]-[identifier] with version [startupVersion] on slot [slotId] is not installed. Error: [errorMsg]
This fault occurs when an app instance is not installed.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmAppInstance2AppInstanceError
Error in App Instance [appName]-[identifier] with version [startupVersion] on slot [slotId]. [errorMsg]
This fault occurs when an app instance is in a non-terminal error state.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmAppInstance2AppInstanceUnsupported
App Instance [appName]-[identifier] with version [startupVersion] on slot [slotId] is not supported in the current bundle. Error: [errorMsg]
This fault occurs when an app instance is not supported in the current platform bundle.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmAppInstance2SoftwareIncompatible
This fault occurs when the main application version and the decorator version are not compatible with each other.
If you see this fault, take the following actions:
Step 1 Remove data port decorator from logical device
fltNetworkElementSamconfig
The Supervisor’s sam.config file stored in the /opt partition is not accessible
This fault occurs when the Supervisor is not able to access the persistent store of the sam.config file. Attempts at modifying the admin password, Supervisor OOB IPv4/6 addresses, DNS server, and strong password enforcement may fail.
If you see this fault in a non-Cleared state, take the following actions:
Step 1 Create a show tech-support fprm detail file and copy it to a remote location (see the CLI sketch below).
Step 2 Back up the existing configuration using the export-config feature and copy it to a remote location.
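A minimal sketch of collecting and copying the tech-support file from the local-mgmt shell; the bundle filename, remote host, and path are hypothetical placeholders:
  FPR# connect local-mgmt
  FPR(local-mgmt)# show tech-support fprm detail
  FPR(local-mgmt)# copy workspace:///techsupport/20xxxxxx_FPR_FPRM.tar scp://user@remote-host/backups/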
fltSmAppInstance2AppFaultState
AppInstance [appName]-[identifier] with version [runningVersion] on slot [slotId] is in failed state. Error: [errorMsg]
This fault occurs when AppInstance is in "fault" state.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSmExternalPortLinkConflictConfig
The external-port-link [name] conflict with the application. [errorDescription]. To correct it, synchronize and remove the conflict sub-interface, save and deploy from Firepower Management Center. Remove and recreate external-port-link [name] from MIO, then, synchronize it again from Firepower Management Center.
This fault occurs when an external-port-link conflicts with the application.
If you see this fault, take either of the following two actions.
Option 1:
Step 1 Delete the conflicting external-port-link from MIO, and use another sub-interface to create a new external-port-link.
Step 2 Sync in Firepower Management Center to get the new sub-interface.
Option 2:
Step 1 Delete the conflicting sub-interface from Firepower Management Center.
Step 2 Save and deploy the changes from Firepower Management Center.
Step 3 Delete the conflicting external-port-link and recreate it from MIO.
Step 4 Sync again in Firepower Management Center to get the new sub-interface.
fltSmSlotAdapter2NotResponding
This fault occurs if adapter 2 is not responding to heartbeats.
If you see this fault, take the following actions:
Step 1 Reboot the security module associated with the slot
fltSmHwCryptoHwCryptoNotOperable
Hardware crypto is enabled but not operable on [appName]-[identifier]. Reason: HwCryptoVersion is ’[hwCryptoVersion]’.
This fault occurs when the admin state of hardware crypto is set to enabled but the instance does not support it, or when there is a failure retrieving the hardware crypto version.
If you see this fault, take the following actions:
Step 1 Set admin state of hardware crypto to ’disabled’ to free the hardware crypto resource on the application instance
fltPkiKeyRingEc
[name] Keyring’s ECDSA elliptic-curve is invalid.
This fault occurs when an ECDSA keyring is created without an elliptic curve set.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltCommTelemetryTelemetryRegistrationFailed
Auto registration of device for telemetry failed. Error: [errorMessage]
This fault occurs when registration of the device with the cloud for telemetry fails.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltCommTelemetryTelemetryUnregistrationFailed
Failed to unregister the device with the cloud. Error: [errorMessage]
This fault occurs when unregistration of the device with the cloud fails.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltCommTelemetryTelemetryGetDataFailed
Failed to get telemetry data from application. Error: [errorMessage]
This fault occurs when there is a failure to get telemetry data from the application.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltCommTelemetryTelemetrySendDataFailed
Failed to send telemetry data. Error: [errorMessage]
This fault occurs when there is a failure to send telemetry data.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltAaaUserEpPasswordEncryptionKeyNotSet
The password encryption key has not been set.
This fault typically occurs because a password encryption key is not set on the system. The password encryption key is used to protect credentials when a user exports the system’s configuration.
If you see this fault, take the following actions:
Step 1 Scope to security, and run the command "set password-encryption-key".
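A minimal sketch of this command sequence; the prompts shown are illustrative, and the key value is entered interactively:
  FPR# scope security
  FPR /security # set password-encryption-key
  Enter a key:
  Confirm the key:
  FPR /security* # commit-buffer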
fltSdInternalMgmtBootstrapInternalMgmtVnicConfigFail
Failed to allocate internal mgmt vnic for application instance [appName]-[identifier] on slot [slotId]
This fault occurs when the vNIC for the internal management bootstrap (in the decorator case) was not allocated.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSdExternalLduLinkExternalLduLinkVnicConfigFail
Failed to allocate vnic for ExternalLduLink [name] in LogicalDevice [ldName](type:[type]) for [appName] on slot [slotId]
This fault occurs when the vNIC for the ExternalLduLink was not allocated.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSdAppLduLinkAppLduLinkEndpoint1VnicConfigFail
Failed to allocate vnic for AppLduLink [name] in LogicalDevice [ldName](type:[type]) for [appName] on slot [slotId]
This fault occurs when the vNICs for EndPoint1 (decorator) in the AppLduLink are not allocated.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSdAppLduLinkAppLduLinkEndpoint2VnicConfigFail
Failed to allocate vnic for AppLduLink [name] in LogicalDevice [ldName](type:[type]) for main app on slot [slotId]
This fault occurs when the vNICs for EndPoint2 (main app) in the AppLduLink are not allocated.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltSdPreAllocatedVnicVnicPreAllocationFail
Failed to pre-allocate vnic for application instance [appName]-[identifier] on slot [slotId] (type:[portType])
This fault occurs when vNICs are not pre-allocated for native/container FTD instances.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltFirmwareVersionIssueImageVersionMismatch
Mismatched [mismatchType] image version [version] detected. Expected version [installedImageVersion] from FXOS [installedPackageVersion].
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltFabricComputeSlotEpBladeDecommissionFail
Service Module [slotId] - Decommission failed, reason: [failReason]
This fault occurs when Cisco FPR Manager fails to decommission the blade.
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltEtherFtwPortPairPhyBypass
Port-pair [portName]-[peerPortName] in phy-bypass mode due to watchdog timeout
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltEtherFtwPortPairPhyBypassErr
Port-pair [portName]-[peerPortName] in phy-bypass mode due to switch config error
Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the issue using the tools and utilities provided at http://www.cisco.com/tac. If you cannot resolve the issue, create a show tech-support file and contact Cisco Technical Support.
fltMgmtImporterConfiguration import failed
Importing configuration failed: [errMsg]
This fault occurs when an error is encountered while importing a configuration file.
If you see this fault, take action based on the error description.