# opi-marvell-bridge

OPI gRPC to Marvell bridge third party repo
This is a Marvell app (bridge) implementing OPI APIs for storage, inventory, IPsec, and networking (future).

This project welcomes contributions and suggestions. We are happy to have the community involved via submission of Issues and Pull Requests (with substantive content or even just fixes). We hope the documents, test framework, etc. will become a community process with active engagement. PRs can be reviewed by any number of people, and a maintainer may accept.

See CONTRIBUTING and GitHub Basic Process for more details.
Build like this:

```shell
go build -v -o /opi-marvell-bridge ./cmd/...
```
Import like this:

```go
import "github.com/opiproject/opi-marvell-bridge/pkg/frontend"
```
Before starting the bridge, the Redis and Jaeger services must be running. To use non-standard ports for these services, run the binary with `--help` to find out which parameters need to be passed.
On the DPU/IPU (e.g. with IP=10.10.10.1) run:

```shell
$ docker run --rm -it -v /var/tmp/:/var/tmp/ -p 50051:50051 ghcr.io/opiproject/opi-marvell-bridge:main
2023/09/12 20:29:05 Connection to SPDK will be via: unix detected from /var/tmp/spdk.sock
2023/09/12 20:29:05 gRPC server listening at [::]:50051
2023/09/12 20:29:05 HTTP Server listening at 8082
```
On the x86 management VM run:

Reflection:

```shell
$ docker run --network=host --rm -it namely/grpc-cli ls --json_input --json_output localhost:50051
grpc.reflection.v1alpha.ServerReflection
opi_api.inventory.v1.InventorySvc
opi_api.security.v1.IPsec
opi_api.storage.v1.AioVolumeService
opi_api.storage.v1.FrontendNvmeService
opi_api.storage.v1.FrontendVirtioBlkService
opi_api.storage.v1.FrontendVirtioScsiService
opi_api.storage.v1.MiddleendService
opi_api.storage.v1.NvmeRemoteControllerService
opi_api.storage.v1.NullVolumeService
```
Full test suite:

```shell
docker run --rm -it --network=host docker.io/opiproject/godpu:main inventory get --addr="10.10.10.10:50051"
docker run --rm -it --network=host docker.io/opiproject/godpu:main storagetest --addr="10.10.10.10:50051"
docker run --rm -it --network=host docker.io/opiproject/godpu:main ipsectest --addr=10.10.10.10:50151 --pingaddr=8.8.8.1
```
Run either gRPC or HTTP requests:

```shell
# gRPC requests
docker run --network=host --rm -it namely/grpc-cli ls --json_input --json_output 10.10.10.10:50051 -l
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmeSubsystem "{nvme_subsystem : {spec : {nqn: 'nqn.2022-09.io.spdk:opitest2', serial_number: 'myserial2', model_number: 'mymodel2', max_namespaces: 11} }, nvme_subsystem_id : 'subsystem2' }"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 ListNvmeSubsystems "{}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 GetNvmeSubsystem "{name : 'nvmeSubsystems/subsystem2'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmeController "{parent: 'nvmeSubsystems/subsystem2', nvme_controller : {spec : {nvme_controller_id: 2, pcie_id : {physical_function : 0, virtual_function : 0, port_id: 0}, max_nsq:5, max_ncq:5, 'trtype': 'NVME_TRANSPORT_TYPE_PCIE' } }, nvme_controller_id : 'controller1'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 ListNvmeControllers "{parent : 'nvmeSubsystems/subsystem2'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 GetNvmeController "{name : 'nvmeSubsystems/subsystem2/nvmeControllers/controller1'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmeNamespace "{parent: 'nvmeSubsystems/subsystem2', nvme_namespace : {spec : {volume_name_ref : 'Malloc0', 'host_nsid' : '10', uuid:{value : '1b4e28ba-2fa1-11d2-883f-b9a761bde3fb'}, nguid: '1b4e28ba-2fa1-11d2-883f-b9a761bde3fb', eui64: 1967554867335598546 } }, nvme_namespace_id: 'namespace1'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 ListNvmeNamespaces "{parent : 'nvmeSubsystems/subsystem2'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 GetNvmeNamespace "{name : 'nvmeSubsystems/subsystem2/nvmeNamespaces/namespace1'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 StatsNvmeNamespace "{name : 'nvmeSubsystems/subsystem2/nvmeNamespaces/namespace1'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmeRemoteController "{nvme_remote_controller : {multipath: 'NVME_MULTIPATH_MULTIPATH'}, nvme_remote_controller_id: 'nvmetcp12'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 ListNvmeRemoteControllers "{}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 GetNvmeRemoteController "{name: 'nvmeRemoteControllers/nvmetcp12'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmePath "{parent: 'nvmeRemoteControllers/nvmetcp12', nvme_path : {traddr:'11.11.11.2', trtype:'NVME_TRANSPORT_TYPE_TCP', fabrics:{subnqn:'nqn.2016-06.com.opi.spdk.target0', trsvcid:'4444', adrfam:'NVME_ADDRESS_FAMILY_IPV4', hostnqn:'nqn.2014-08.org.nvmexpress:uuid:feb98abe-d51f-40c8-b348-2753f3571d3c'}}, nvme_path_id: 'nvmetcp12path0'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmeRemoteController "{nvme_remote_controller : {multipath: 'NVME_MULTIPATH_DISABLE'}, nvme_remote_controller_id: 'nvmepcie13'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmePath "{parent: 'nvmeRemoteControllers/nvmepcie13', nvme_path : {traddr:'0000:01:00.0', trtype:'NVME_TRANSPORT_TYPE_PCIE'}, nvme_path_id: 'nvmepcie13path0'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 ListNvmePaths "{parent : 'nvmeRemoteControllers/nvmepcie13'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmePath "{name: 'nvmeRemoteControllers/nvmepcie13/nvmePaths/nvmepcie13path0'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmeRemoteController "{name: 'nvmeRemoteControllers/nvmepcie13'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 GetNvmePath "{name: 'nvmeRemoteControllers/nvmetcp12/nvmePaths/nvmetcp12path0'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmePath "{name: 'nvmeRemoteControllers/nvmetcp12/nvmePaths/nvmetcp12path0'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmeRemoteController "{name: 'nvmeRemoteControllers/nvmetcp12'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmeNamespace "{name : 'nvmeSubsystems/subsystem2/nvmeNamespaces/namespace1'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmeController "{name : 'nvmeSubsystems/subsystem2/nvmeControllers/controller1'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmeSubsystem "{name : 'nvmeSubsystems/subsystem2'}"
```
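The gRPC calls above address objects by hierarchical, AIP-style resource names (`nvmeSubsystems/<id>/nvmeControllers/<id>`, and so on). A minimal Go sketch of that naming scheme, useful when scripting these calls; the helper functions are illustrative and not part of this repo:

```go
package main

import (
	"fmt"
	"path"
)

// subsystemName builds the top-level subsystem resource name,
// e.g. "nvmeSubsystems/subsystem2".
func subsystemName(id string) string {
	return path.Join("nvmeSubsystems", id)
}

// controllerName nests a controller under its parent subsystem.
func controllerName(subsystem, id string) string {
	return path.Join(subsystemName(subsystem), "nvmeControllers", id)
}

// namespaceName nests a namespace under its parent subsystem.
func namespaceName(subsystem, id string) string {
	return path.Join(subsystemName(subsystem), "nvmeNamespaces", id)
}

// remoteControllerName builds the top-level remote controller name.
func remoteControllerName(id string) string {
	return path.Join("nvmeRemoteControllers", id)
}

// nvmePathName nests a path under its parent remote controller.
func nvmePathName(remoteController, id string) string {
	return path.Join(remoteControllerName(remoteController), "nvmePaths", id)
}

func main() {
	fmt.Println(controllerName("subsystem2", "controller1"))
	// → nvmeSubsystems/subsystem2/nvmeControllers/controller1
	fmt.Println(nvmePathName("nvmetcp12", "nvmetcp12path0"))
	// → nvmeRemoteControllers/nvmetcp12/nvmePaths/nvmetcp12path0
}
```

These are exactly the `name` strings passed to the Get/Delete calls above, and the `parent` strings passed to Create/List.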
```shell
# HTTP requests

# inventory
curl -kL http://10.10.10.10:8082/v1/inventory/1/inventory/2

# Nvme
# create
curl -X POST -f http://10.10.10.10:8082/v1/nvmeRemoteControllers?nvme_remote_controller_id=nvmetcp12 -d '{"multipath": "NVME_MULTIPATH_MULTIPATH"}'
curl -X POST -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12/nvmePaths?nvme_path_id=nvmetcp12path0 -d '{"traddr":"11.11.11.2", "trtype":"NVME_TRANSPORT_TYPE_TCP", "fabrics":{"subnqn":"nqn.2016-06.com.opi.spdk.target0", "trsvcid":"4444", "adrfam":"NVME_ADDRESS_FAMILY_IPV4", "hostnqn":"nqn.2014-08.org.nvmexpress:uuid:feb98abe-d51f-40c8-b348-2753f3571d3c"}}'
curl -X POST -f http://10.10.10.10:8082/v1/nvmeSubsystems?nvme_subsystem_id=subsys0 -d '{"spec": {"nqn": "nqn.2022-09.io.spdk:opitest1"}}'
curl -X POST -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeNamespaces?nvme_namespace_id=namespace0 -d '{"spec": {"volume_name_ref": "Malloc0", "host_nsid": 10}}'
curl -X POST -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeControllers?nvme_controller_id=ctrl0 -d '{"spec": {"trtype": "NVME_TRANSPORT_TYPE_TCP", "fabrics_id":{"traddr": "127.0.0.1", "trsvcid": "4421", "adrfam": "NVME_ADDRESS_FAMILY_IPV4"}}}'

# get
curl -X GET -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12
curl -X GET -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12/nvmePaths/nvmetcp12path0
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeNamespaces/namespace0
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeControllers/ctrl0

# list
curl -X GET -f http://10.10.10.10:8082/v1/nvmeRemoteControllers
curl -X GET -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12/nvmePaths
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeNamespaces
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeControllers

# stats
curl -X GET -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12:stats
curl -X GET -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12/nvmePaths/nvmetcp12path0:stats
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0:stats
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeNamespaces/namespace0:stats
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeControllers/ctrl0:stats

# update
curl -X PATCH -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12 -d '{"multipath": "NVME_MULTIPATH_MULTIPATH"}'
curl -X PATCH -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12/nvmePaths/nvmetcp12path0 -d '{"traddr":"11.11.11.2", "trtype":"NVME_TRANSPORT_TYPE_TCP", "fabrics":{"subnqn":"nqn.2016-06.com.opi.spdk.target0", "trsvcid":"4444", "adrfam":"NVME_ADDRESS_FAMILY_IPV4", "hostnqn":"nqn.2014-08.org.nvmexpress:uuid:feb98abe-d51f-40c8-b348-2753f3571d3c"}}'
curl -X PATCH -k http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeNamespaces/namespace0 -d '{"spec": {"volume_name_ref": "Malloc1", "host_nsid": 10}}'
curl -X PATCH -k http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeControllers/ctrl0 -d '{"spec": {"trtype": "NVME_TRANSPORT_TYPE_TCP", "fabrics_id":{"traddr": "127.0.0.1", "trsvcid": "4421", "adrfam": "NVME_ADDRESS_FAMILY_IPV4"}}}'

# delete
curl -X DELETE -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeControllers/ctrl0
curl -X DELETE -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeNamespaces/namespace0
curl -X DELETE -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0
curl -X DELETE -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12/nvmePaths/nvmetcp12path0
curl -X DELETE -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12
```
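The HTTP gateway mirrors the same resource hierarchy as the gRPC API: creates POST to the collection with the new object's id in a query parameter, and stats use an AIP-style `:stats` custom verb appended to the resource name. A small Go sketch of these URL conventions; the base address is the example value from this README and the helper names are illustrative:

```go
package main

import "fmt"

// base is the bridge's HTTP gateway address (example value from this README).
const base = "http://10.10.10.10:8082/v1"

// createURL builds a POST target: the new object's id goes in a query parameter.
func createURL(collection, idParam, id string) string {
	return fmt.Sprintf("%s/%s?%s=%s", base, collection, idParam, id)
}

// statsURL appends the ':stats' custom verb to a resource name.
func statsURL(resource string) string {
	return fmt.Sprintf("%s/%s:stats", base, resource)
}

func main() {
	fmt.Println(createURL("nvmeSubsystems", "nvme_subsystem_id", "subsys0"))
	// → http://10.10.10.10:8082/v1/nvmeSubsystems?nvme_subsystem_id=subsys0
	fmt.Println(statsURL("nvmeSubsystems/subsys0"))
	// → http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0:stats
}
```

Get, list, update, and delete all target the plain resource URL (`{base}/{resource name}`), differing only in the HTTP method, as the curl examples above show.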