
Re: ib_srp disconnect on ESXi 5.0 with H:0x5 D:0x0 P:0x0 error


Hello xgrv,

Thanks for your reply. You are crazy ^^ I will test it with fio in the next few days (see the sketch further down). I use three Solaris ZFS targets over IB for four ESXi 5.5 servers. My ib_srp parameters look like this:

 

~ # esxcli system module parameters list -m ib_srp
Name                  Type  Value  Description
--------------------  ----  -----  ------------------------------------------------------------------------------
dead_state_time       int   5      Number of minutes a target can be in DEAD state before moving to REMOVED state
debug_level           int          Set debug level (1)
heap_initial          int          Initial heap size allocated for the driver.
heap_max              int          Maximum attainable heap size for the driver.
max_srp_targets       int          Max number of srp targets per scsi host (ie. HCA)
max_vmhbas            int   1      Maximum number of vmhba(s) per physical port (0<x<8)
mellanox_workarounds  int          Enable workarounds for Mellanox SRP target bugs if != 0
srp_can_queue         int   1024   Max number of commands can queue per scsi_host ie. HCA
srp_cmd_per_lun       int   64     Max number of commands can queue per lun
srp_sg_tablesize      int   32     Max number of scatter lists supportted per IO - default is 32
topspin_workarounds   int          Enable workarounds for Topspin/Cisco SRP target bugs if != 0
use_fmr               int          Enable/disable FMR support (1)
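
In case it is useful for comparison: the non-default values above were set with something like the command below (just a sketch of what I did, the values are only what works for my setup, and the host needs a reboot before the new values take effect):

~ # esxcli system module parameters set -m ib_srp -p "max_vmhbas=1 srp_can_queue=1024 srp_cmd_per_lun=64 srp_sg_tablesize=32 dead_state_time=5"
~ # reboot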

 

What did you mean by "the 'H:0x5 D:0x0 P:0x0' error was corrected by optimizing the latency"? How did you optimize it?
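
For my fio test I will probably start with something like this inside a Linux VM on one of the SRP datastores (only a sketch, the test file path, size, and job settings are my first guess) and then look mainly at the clat latency percentiles in the output:

fio --name=srp-latency --filename=/mnt/fiotest/testfile --size=10G --direct=1 --ioengine=libaio --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting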

I think I had a similar problem with FC cards in ESXi... That was the reason we changed to InfiniBand, and it has been great. Hopefully we can find a solution for this nasty problem...

Thank you very much, xgrv... If I can help you, I will do what I can...

Thanks

Thomas

