Unverified Commit 1c7b8be3 authored by Ti Chi Robot, committed by GitHub

Merge branch 'master' into promote-zhiqiang-maintainer

parents 6fb83b2a 4bc7ef58
Showing with 5435 additions and 64 deletions
......@@ -7,10 +7,7 @@ https://github.com/chaos-mesh/chaos-mesh/blob/master/CONTRIBUTING.md
If you still have questions, please let us know via issues.
Please follow the Title Formats below when you open a new PR:
1. module[, module2, module3]: what's changed
2. *: what's changed
Please follow https://www.conventionalcommits.org/en/v1.0.0/ when you open a new PR:
-->
### What problem does this PR solve?
......@@ -25,12 +22,11 @@ Please follow the Title Formats below when you open a new PR:
### Related changes
- [ ] Need to update `chaos-mesh/website`
- [ ] Need to update `Dashboard UI`
- [ ] This change also requires further updates to `chaos-mesh/website` (e.g. docs)
- [ ] This change also requires further updates to `UI interface`
- Need to **cherry-pick to release branches**
- [ ] release-2.3
- [ ] release-2.2
- [ ] release-2.1
### Checklist
......@@ -54,7 +50,7 @@ Tests
Side effects
- [ ] Breaking backward compatibility
- [ ] **Breaking backward compatibility**
### DCO
......
......@@ -10,12 +10,44 @@ For more information and how-to, see [RFC: Keep A Changelog](https://github.com/
### Added
- Add `RemoteCluster` resource type [#3342](https://github.com/chaos-mesh/chaos-mesh/pull/3342)
- Add `clusterregistry` package to help developers to develop multi-cluster reconciler [#3342](https://github.com/chaos-mesh/chaos-mesh/pull/3342)
### Changed
- Replace io/ioutil package with os package. [#3539](https://github.com/chaos-mesh/chaos-mesh/pull/3539)
### Deprecated
- Nothing
### Removed
- Nothing
### Fixed
- Protect chaos available namespace and filter namespaces if needed [#3473](https://github.com/chaos-mesh/chaos-mesh/pull/3473)
- Respect flag `enableProfiling` and do not register profiler endpoints when it's false [#3474](https://github.com/chaos-mesh/chaos-mesh/pull/3474)
- Fix the blank screen after creating chaos experiment with "By YAML" [#3489](https://github.com/chaos-mesh/chaos-mesh/pull/3489)
- Update hint text about the manual token generating process for Kubernetes 1.24+ [#3505](https://github.com/chaos-mesh/chaos-mesh/pull/3505)
- Fix IOChaos `containerNames` field in UI [#3533](https://github.com/chaos-mesh/chaos-mesh/pull/3533)
- Fix BlockChaos not showing its Chinese name. [#3536](https://github.com/chaos-mesh/chaos-mesh/pull/3536)
- Add `omitempty` JSON tag to optional fields of the CRD objects. [#3531](https://github.com/chaos-mesh/chaos-mesh/pull/3531)
### Security
- Nothing
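Note: the `omitempty` change from #3531 shows up throughout the API diffs below (for example on `ChaosCondition.Reason`). A minimal, self-contained sketch of what the tag does to the serialized output; the struct here is hypothetical and only the tag behaviour is the point:

package main

import (
	"encoding/json"
	"fmt"
)

// hypothetical type; only the json tags matter for this illustration
type condition struct {
	Type   string `json:"type"`
	Reason string `json:"reason,omitempty"` // dropped from the output when empty
}

func main() {
	out, _ := json.Marshal(condition{Type: "Selected"})
	fmt.Println(string(out)) // prints {"type":"Selected"} with no empty "reason" key
}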
## [2.3.0] - 2022-07-29
### Added
- Add more status for record [#3170](https://github.com/chaos-mesh/chaos-mesh/pull/3170)
- Add `chaosDaemon.updateStrategy` to Helm chart to allow configuring `DaemonSetUpdateStrategy` for chaos-daemon [#3108](https://github.com/chaos-mesh/chaos-mesh/pull/3108)
- Add AArch64 support for TimeChaos [#3088](https://github.com/chaos-mesh/chaos-mesh/pull/3088)
- Add integration test and link test on arm [#3177](https://github.com/chaos-mesh/chaos-mesh/pull/3177)
- Add `spec.privateKey.rotationPolicy` to Certificates, to comply with requirements in cert-manager 1.8 [#3325](https://github.com/chaos-mesh/chaos-mesh/pull/3325)
- Add `RemoteCluster` resource type [#3342](https://github.com/chaos-mesh/chaos-mesh/pull/3342)
- Add `clusterregistry` package to help developers to develop multi-cluster reconciler [#3342](https://github.com/chaos-mesh/chaos-mesh/pull/3342)
- Support `Suspend` in next generation `New Workflow`'s UI [#3254](https://github.com/chaos-mesh/chaos-mesh/pull/3254)
- Add helm annotations for Artifact Hub [#3355](https://github.com/chaos-mesh/chaos-mesh/pull/3355)
......@@ -25,6 +57,7 @@ For more information and how-to, see [RFC: Keep A Changelog](https://github.com/
- Add `QPS` and `Burst` for Chaos Dashboard Configuration [#3476](https://github.com/chaos-mesh/chaos-mesh/pull/3476)
- Add guide and example for monitoring Chaos Mesh [#3030](https://github.com/chaos-mesh/chaos-mesh/pull/3030)
- Support `KernelChaos` in `AutoForm` [#3449](https://github.com/chaos-mesh/chaos-mesh/pull/3449)
- Sync latest Chaosd and PhysicalMachineChaos [#3477](https://github.com/chaos-mesh/chaos-mesh/pull/3477)
### Changed
......@@ -45,6 +78,7 @@ For more information and how-to, see [RFC: Keep A Changelog](https://github.com/
- Simplify logic and add a test case for finalizers. [#3422](https://github.com/chaos-mesh/chaos-mesh/pull/3422)
- Update API requests with OpenAPI generated client [#2926](https://github.com/chaos-mesh/chaos-mesh/pull/2926)
- Implement some missing methods in ctrl server [#3462](https://github.com/chaos-mesh/chaos-mesh/pull/3462)
- Use `net.Interfaces()` to implement `getAllInterfaces()` [#3484](https://github.com/chaos-mesh/chaos-mesh/pull/3484)
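As context for the `getAllInterfaces()` rework in #3484, `net.Interfaces()` is the standard-library call that enumerates network interfaces; a minimal sketch of its use (not the chaos-daemon code itself):

package main

import (
	"fmt"
	"net"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, iface := range ifaces {
		fmt.Println(iface.Name) // e.g. "lo", "eth0"
	}
}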
### Deprecated
......@@ -76,11 +110,7 @@ For more information and how-to, see [RFC: Keep A Changelog](https://github.com/
- Fix Workflow Validating Webhook Panic [#3413](https://github.com/chaos-mesh/chaos-mesh/pull/3413)
- Overwrite $IMAGE_BUILD_ENV_TAG with $IMAGE_TAG-$ARCH in `upload_env_image.yml` github action [#3444](https://github.com/chaos-mesh/chaos-mesh/pull/3444)
- Add a judgement of `enterNS` in `getAllInterfaces()` [#3459](https://github.com/chaos-mesh/chaos-mesh/pull/3459)
- Protect chaos available namespace and filter namespaces if needed [#3473](https://github.com/chaos-mesh/chaos-mesh/pull/3473)
- Respect flag `enableProfiling` and do not register profiler endpoints when it's false [#3474](https://github.com/chaos-mesh/chaos-mesh/pull/3474)
- Fix JVMChaos loading missing jar file for injection [#3491](https://github.com/chaos-mesh/chaos-mesh/pull/3491)
- Fix the blank screen after creating chaos experiment with "By YAML" [#3489](https://github.com/chaos-mesh/chaos-mesh/pull/3489)
- Update hint text about the manual token generating process for Kubernetes 1.24+ [#3505](https://github.com/chaos-mesh/chaos-mesh/pull/3505)
### Security
......
......@@ -52,7 +52,7 @@ type ChaosCondition struct {
Type ChaosConditionType `json:"type"`
Status corev1.ConditionStatus `json:"status"`
// +optional
Reason string `json:"reason"`
Reason string `json:"reason,omitempty"`
}
type DesiredPhase string
......
......@@ -51,7 +51,7 @@ type DNSChaos struct {
// +optional
// Most recently observed status of the chaos experiment about pods
Status DNSChaosStatus `json:"status"`
Status DNSChaosStatus `json:"status,omitempty"`
}
var _ InnerObjectWithSelector = (*DNSChaos)(nil)
......@@ -78,7 +78,7 @@ type DNSChaosSpec struct {
// The value is ["google.com", "github.*", "chaos-mes?.org"],
// will take effect on "google.com", "github.com" and "chaos-mesh.org"
// +optional
DomainNamePatterns []string `json:"patterns"`
DomainNamePatterns []string `json:"patterns,omitempty"`
}
// DNSChaosStatus defines the observed state of DNSChaos
......
......@@ -37,7 +37,7 @@ type NetworkChaos struct {
// +optional
// Most recently observed status of the chaos experiment about pods
Status NetworkChaosStatus `json:"status"`
Status NetworkChaosStatus `json:"status,omitempty"`
}
var _ InnerObjectWithCustomStatus = (*NetworkChaos)(nil)
......
......@@ -23,26 +23,49 @@ import (
type PhysicalMachineChaosAction string
var (
PMStressCPUAction PhysicalMachineChaosAction = "stress-cpu"
PMStressMemAction PhysicalMachineChaosAction = "stress-mem"
PMDiskWritePayloadAction PhysicalMachineChaosAction = "disk-write-payload"
PMDiskReadPayloadAction PhysicalMachineChaosAction = "disk-read-payload"
PMDiskFillAction PhysicalMachineChaosAction = "disk-fill"
PMNetworkCorruptAction PhysicalMachineChaosAction = "network-corrupt"
PMNetworkDuplicateAction PhysicalMachineChaosAction = "network-duplicate"
PMNetworkLossAction PhysicalMachineChaosAction = "network-loss"
PMNetworkDelayAction PhysicalMachineChaosAction = "network-delay"
PMNetworkPartitionAction PhysicalMachineChaosAction = "network-partition"
PMNetworkBandwidthAction PhysicalMachineChaosAction = "network-bandwidth"
PMNetworkDNSAction PhysicalMachineChaosAction = "network-dns"
PMProcessAction PhysicalMachineChaosAction = "process"
PMJVMExceptionAction PhysicalMachineChaosAction = "jvm-exception"
PMJVMGCAction PhysicalMachineChaosAction = "jvm-gc"
PMJVMLatencyAction PhysicalMachineChaosAction = "jvm-latency"
PMJVMReturnAction PhysicalMachineChaosAction = "jvm-return"
PMJVMStressAction PhysicalMachineChaosAction = "jvm-stress"
PMJVMRuleDataAction PhysicalMachineChaosAction = "jvm-rule-data"
PMClockAction PhysicalMachineChaosAction = "clock"
PMStressCPUAction PhysicalMachineChaosAction = "stress-cpu"
PMStressMemAction PhysicalMachineChaosAction = "stress-mem"
PMDiskWritePayloadAction PhysicalMachineChaosAction = "disk-write-payload"
PMDiskReadPayloadAction PhysicalMachineChaosAction = "disk-read-payload"
PMDiskFillAction PhysicalMachineChaosAction = "disk-fill"
PMNetworkCorruptAction PhysicalMachineChaosAction = "network-corrupt"
PMNetworkDuplicateAction PhysicalMachineChaosAction = "network-duplicate"
PMNetworkLossAction PhysicalMachineChaosAction = "network-loss"
PMNetworkDelayAction PhysicalMachineChaosAction = "network-delay"
PMNetworkPartitionAction PhysicalMachineChaosAction = "network-partition"
PMNetworkBandwidthAction PhysicalMachineChaosAction = "network-bandwidth"
PMNetworkDNSAction PhysicalMachineChaosAction = "network-dns"
PMNetworkFloodAction PhysicalMachineChaosAction = "network-flood"
PMNetworkDownAction PhysicalMachineChaosAction = "network-down"
PMProcessAction PhysicalMachineChaosAction = "process"
PMJVMExceptionAction PhysicalMachineChaosAction = "jvm-exception"
PMJVMGCAction PhysicalMachineChaosAction = "jvm-gc"
PMJVMLatencyAction PhysicalMachineChaosAction = "jvm-latency"
PMJVMReturnAction PhysicalMachineChaosAction = "jvm-return"
PMJVMStressAction PhysicalMachineChaosAction = "jvm-stress"
PMJVMRuleDataAction PhysicalMachineChaosAction = "jvm-rule-data"
PMJVMMySQLAction PhysicalMachineChaosAction = "jvm-mysql"
PMClockAction PhysicalMachineChaosAction = "clock"
PMRedisExpirationAction PhysicalMachineChaosAction = "redis-expiration"
PMRedisPenetrationAction PhysicalMachineChaosAction = "redis-penetration"
PMRedisCacheLimitAction PhysicalMachineChaosAction = "redis-cacheLimit"
PMRedisSentinelRestartAction PhysicalMachineChaosAction = "redis-restart"
PMRedisSentinelStopAction PhysicalMachineChaosAction = "redis-stop"
PMKafkaFillAction PhysicalMachineChaosAction = "kafka-fill"
PMKafkaFloodAction PhysicalMachineChaosAction = "kafka-flood"
PMKafkaIOAction PhysicalMachineChaosAction = "kafka-io"
PMHTTPAbortAction PhysicalMachineChaosAction = "http-abort"
PMHTTPDelayAction PhysicalMachineChaosAction = "http-delay"
PMHTTPConfigAction PhysicalMachineChaosAction = "http-config"
PMHTTPRequestAction PhysicalMachineChaosAction = "http-request"
PMFileCreateAction PhysicalMachineChaosAction = "file-create"
PMFileModifyPrivilegeAction PhysicalMachineChaosAction = "file-modify"
PMFileDeleteAction PhysicalMachineChaosAction = "file-delete"
PMFileRenameAction PhysicalMachineChaosAction = "file-rename"
PMFileAppendAction PhysicalMachineChaosAction = "file-append"
PMFileReplaceAction PhysicalMachineChaosAction = "file-replace"
PMVMAction PhysicalMachineChaosAction = "vm"
PMUserDefinedAction PhysicalMachineChaosAction = "user_defined"
)
// +kubebuilder:object:root=true
......@@ -60,12 +83,12 @@ type PhysicalMachineChaos struct {
// +optional
// Most recently observed status of the chaos experiment
Status PhysicalMachineChaosStatus `json:"status"`
Status PhysicalMachineChaosStatus `json:"status,omitempty"`
}
// PhysicalMachineChaosSpec defines the desired state of PhysicalMachineChaos
type PhysicalMachineChaosSpec struct {
// +kubebuilder:validation:Enum=stress-cpu;stress-mem;disk-read-payload;disk-write-payload;disk-fill;network-corrupt;network-duplicate;network-loss;network-delay;network-partition;network-dns;network-bandwidth;process;jvm-exception;jvm-gc;jvm-latency;jvm-return;jvm-stress;jvm-rule-data;clock
// +kubebuilder:validation:Enum=stress-cpu;stress-mem;disk-read-payload;disk-write-payload;disk-fill;network-corrupt;network-duplicate;network-loss;network-delay;network-partition;network-dns;network-bandwidth;network-flood;network-down;process;jvm-exception;jvm-gc;jvm-latency;jvm-return;jvm-stress;jvm-rule-data;jvm-mysql;clock;redis-expiration;redis-penetration;redis-cacheLimit;redis-restart;redis-stop;kafka-fill;kafka-flood;kafka-io;file-create;file-modify;file-delete;file-rename;file-append;file-replace;vm;user_defined
Action PhysicalMachineChaosAction `json:"action"`
PhysicalMachineSelector `json:",inline"`
......@@ -97,7 +120,7 @@ type PhysicalMachineSelector struct {
// Selector is used to select physical machines that are used to inject chaos action.
// +optional
Selector PhysicalMachineSelectorSpec `json:"selector"`
Selector PhysicalMachineSelectorSpec `json:"selector,omitempty"`
// Mode defines the mode to run chaos action.
// Supported mode: one / all / fixed / fixed-percent / random-max-percent
......@@ -192,6 +215,14 @@ type ExpInfo struct {
// +optional
NetworkBandwidth *NetworkBandwidthSpec `json:"network-bandwidth,omitempty"`
// +ui:form:when=action=='network-flood'
// +optional
NetworkFlood *NetworkFloodSpec `json:"network-flood,omitempty"`
// +ui:form:when=action=='network-down'
// +optional
NetworkDown *NetworkDownSpec `json:"network-down,omitempty"`
// +ui:form:when=action=='process'
// +optional
Process *ProcessSpec `json:"process,omitempty"`
......@@ -220,9 +251,93 @@ type ExpInfo struct {
// +optional
JVMRuleData *JVMRuleDataSpec `json:"jvm-rule-data,omitempty"`
// +ui:form:when=action=='jvm-mysql'
// +optional
JVMMySQL *PMJVMMySQLSpec `json:"jvm-mysql,omitempty"`
// +ui:form:when=action=='clock'
// +optional
Clock *ClockSpec `json:"clock,omitempty"`
// +ui:form:when=action=='redis-expiration'
// +optional
RedisExpiration *RedisExpirationSpec `json:"redis-expiration,omitempty"`
// +ui:form:when=action=='redis-penetration'
// +optional
RedisPenetration *RedisPenetrationSpec `json:"redis-penetration,omitempty"`
// +ui:form:when=action=='redis-cacheLimit'
// +optional
RedisCacheLimit *RedisCacheLimitSpec `json:"redis-cacheLimit,omitempty"`
// +ui:form:when=action=='redis-restart'
// +optional
RedisSentinelRestart *RedisSentinelRestartSpec `json:"redis-restart,omitempty"`
// +ui:form:when=action=='redis-stop'
// +optional
RedisSentinelStop *RedisSentinelStopSpec `json:"redis-stop,omitempty"`
// +ui:form:when=action=='kafka-fill'
// +optional
KafkaFill *KafkaFillSpec `json:"kafka-fill,omitempty"`
// +ui:form:when=action=='kafka-flood'
// +optional
KafkaFlood *KafkaFloodSpec `json:"kafka-flood,omitempty"`
// +ui:form:when=action=='kafka-io'
// +optional
KafkaIO *KafkaIOSpec `json:"kafka-io,omitempty"`
// +ui:form:when=action=='http-abort'
// +optional
HTTPAbort *HTTPAbortSpec `json:"http-abort,omitempty"`
// +ui:form:when=action=='http-delay'
// +optional
HTTPDelay *HTTPDelaySpec `json:"http-delay,omitempty"`
// +ui:form:when=action=='http-config'
// +optional
HTTPConfig *HTTPConfigSpec `json:"http-config,omitempty"`
// +ui:form:when=action=='http-request'
// +optional
HTTPRequest *HTTPRequestSpec `json:"http-request,omitempty"`
// +ui:form:when=action=='file-create'
// +optional
FileCreate *FileCreateSpec `json:"file-create,omitempty"`
// +ui:form:when=action=='file-modify'
// +optional
FileModifyPrivilege *FileModifyPrivilegeSpec `json:"file-modify,omitempty"`
// +ui:form:when=action=='file-delete'
// +optional
FileDelete *FileDeleteSpec `json:"file-delete,omitempty"`
// +ui:form:when=action=='file-create'
// +optional
FileRename *FileRenameSpec `json:"file-rename,omitempty"`
// +ui:form:when=action=='file-append'
// +optional
FileAppend *FileAppendSpec `json:"file-append,omitempty"`
// +ui:form:when=action=='file-replace'
// +optional
FileReplace *FileReplaceSpec `json:"file-replace,omitempty"`
// +ui:form:when=action=='vm'
// +optional
VM *VMSpec `json:"vm,omitempty"`
// +ui:form:when=action=='user_defined'
// +optional
UserDefined *UserDefinedSpec `json:"user_defined,omitempty"`
}
type StressCPUSpec struct {
......@@ -356,6 +471,26 @@ type NetworkBandwidthSpec struct {
Hostname string `json:"hostname,omitempty"`
}
type NetworkFloodSpec struct {
// The speed of network traffic, allows bps, kbps, mbps, gbps, tbps unit. bps means bytes per second
Rate string `json:"rate"`
// Generate traffic to this IP address
IPAddress string `json:"ip-address,omitempty"`
// Generate traffic to this port on the IP address
Port string `json:"port,omitempty"`
// The number of iperf parallel client threads to run
Parallel int32 `json:"parallel,omitempty"`
// The number of seconds to run the iperf test
Duration string `json:"duration"`
}
type NetworkDownSpec struct {
// The network interface to impact
Device string `json:"device,omitempty"`
// NIC down time, time units: ns, us (or µs), ms, s, m, h.
Duration string `json:"duration,omitempty"`
}
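A hedged sketch of how these two new specs could be populated (values are made up; per the validators added later in this diff, network-flood requires rate, ip-address, port and duration, while network-down requires device and duration):

// hypothetical helper inside the v1alpha1 package, illustrative values only
func exampleNetworkSpecs() (*NetworkFloodSpec, *NetworkDownSpec) {
	flood := &NetworkFloodSpec{
		Rate:      "100mbps", // bps, kbps, mbps, gbps or tbps
		IPAddress: "10.0.0.5",
		Port:      "5201",
		Parallel:  4,    // number of parallel iperf client threads
		Duration:  "60", // seconds to run the iperf test
	}
	down := &NetworkDownSpec{
		Device:   "eth0",
		Duration: "30s", // ns, us (or µs), ms, s, m, h
	}
	return flood, down
}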
type ProcessSpec struct {
// the process name or the process ID
Process string `json:"process,omitempty"`
......@@ -411,6 +546,20 @@ type JVMRuleDataSpec struct {
RuleData string `json:"rule-data,omitempty"`
}
type PMJVMMySQLSpec struct {
JVMCommonSpec `json:",inline"`
JVMMySQLSpec `json:",inline"`
// The exception to throw for action `exception`,
// or the exception message to throw for action `mysql`
ThrowException string `json:"exception,omitempty"`
// The latency duration for action 'latency'
// or the latency duration in action `mysql`
LatencyDuration int `json:"latency,omitempty"`
}
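A hedged sketch of how this composite spec could be filled in (values are made up; the validator added later in this diff requires a pid, a mysqlConnectorVersion of "5" or "8", and at least one of exception or latency):

// hypothetical helper inside the v1alpha1 package, illustrative values only
func examplePMJVMMySQL() *PMJVMMySQLSpec {
	spec := &PMJVMMySQLSpec{ThrowException: "connection refused"} // made-up message
	spec.Pid = 12345                 // promoted field from the embedded JVM specs
	spec.MySQLConnectorVersion = "8" // promoted field: "5" for 5.X.X, "8" for 8.X.X
	return spec
}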
type ClockSpec struct {
// the pid of target program.
Pid int `json:"pid,omitempty"`
......@@ -421,3 +570,196 @@ type ClockSpec struct {
// Multiple clock ids should be split with ","
ClockIdsSlice string `json:"clock-ids-slice,omitempty"`
}
type RedisCommonSpec struct {
// The address of Redis server
Addr string `json:"addr,omitempty"`
// The password of Redis server
Password string `json:"password,omitempty"`
}
type RedisExpirationSpec struct {
RedisCommonSpec `json:",inline"`
// The expiration of the keys
Expiration string `json:"expiration,omitempty"`
// The keys to be expired
Key string `json:"key,omitempty"`
// Additional options for `expiration`
Option string `json:"option,omitempty"`
}
type RedisPenetrationSpec struct {
RedisCommonSpec `json:",inline"`
// The number of requests to be sent
RequestNum int `json:"requestNum,omitempty"`
}
type RedisCacheLimitSpec struct {
RedisCommonSpec `json:",inline"`
// The size of `maxmemory`
Size string `json:"cacheSize,omitempty"`
// Specifies maxmemory as a percentage of the original value
Percent string `json:"percent,omitempty"`
}
type RedisSentinelRestartSpec struct {
RedisCommonSpec `json:",inline"`
// The path of Sentinel conf
Conf string `json:"conf,omitempty"`
// The control flag determines whether to flush config
FlushConfig bool `json:"flushConfig,omitempty"`
// The path of `redis-server` command-line tool
RedisPath bool `json:"redisPath,omitempty"`
}
type RedisSentinelStopSpec struct {
RedisCommonSpec `json:",inline"`
// The path of Sentinel conf
Conf string `json:"conf,omitempty"`
// The control flag determines whether to flush config
FlushConfig bool `json:"flushConfig,omitempty"`
// The path of `redis-server` command-line tool
RedisPath bool `json:"redisPath,omitempty"`
}
type KafkaCommonSpec struct {
// The topic to attack
Topic string `json:"topic,omitempty"`
// The host of kafka server
Host string `json:"host,omitempty"`
// The port of kafka server
Port uint16 `json:"port,omitempty"`
// The username of kafka client
Username string `json:"username,omitempty"`
// The password of kafka client
Password string `json:"password,omitempty"`
}
type KafkaFillSpec struct {
KafkaCommonSpec `json:",inline"`
// The size of each message
MessageSize uint `json:"messageSize,omitempty"`
// The max bytes to fill
MaxBytes uint64 `json:"maxBytes,omitempty"`
// The command to reload kafka config
ReloadCommand string `json:"reloadCommand,omitempty"`
}
type KafkaFloodSpec struct {
KafkaCommonSpec `json:",inline"`
// The size of each message
MessageSize uint `json:"messageSize,omitempty"`
// The number of worker threads
Threads uint `json:"threads,omitempty"`
}
type KafkaIOSpec struct {
// The topic to attack
Topic string `json:"topic,omitempty"`
// The path of server config
ConfigFile string `json:"configFile,omitempty"`
// Make kafka cluster non-readable
NonReadable bool `json:"nonReadable,omitempty"`
// Make kafka cluster non-writable
NonWritable bool `json:"nonWritable,omitempty"`
}
type HTTPCommonSpec struct {
// Composed with one of the port of HTTP connection, we will only attack HTTP connection with port inside proxy_ports
ProxyPorts []uint `json:"proxy_ports"`
// HTTP target: Request or Response
Target string `json:"target"`
// The TCP port that the target service listens on
Port int32 `json:"port,omitempty"`
// Match path of Uri with wildcard matches
Path string `json:"path,omitempty"`
// HTTP method
Method string `json:"method,omitempty"`
// Code is a rule to select target by http status code in response
Code string `json:"code,omitempty"`
}
type HTTPAbortSpec struct {
HTTPCommonSpec `json:",inline"`
}
type HTTPDelaySpec struct {
HTTPCommonSpec `json:",inline"`
// Delay represents the delay of the target request/response
Delay string `json:"delay"`
}
type HTTPConfigSpec struct {
// The config file path
FilePath string `json:"file_path,omitempty"`
}
// used for HTTP request, now only supports GET
type HTTPRequestSpec struct {
// Request to send
URL string `json:"url,omitempty"`
// Enable connection pool
EnableConnPool bool `json:"enable-conn-pool,omitempty"`
// The number of requests to send
Count int `json:"count,omitempty"`
}
type FileCreateSpec struct {
// FileName is the name of the file to be created, modified, deleted, renamed, or appended.
FileName string `json:"file-name,omitempty"`
// DirName is the directory name to create or delete.
DirName string `json:"dir-name,omitempty"`
}
type FileModifyPrivilegeSpec struct {
// FileName is the name of the file to be created, modified, deleted, renamed, or appended.
FileName string `json:"file-name,omitempty"`
// Privilege is the file privilege to be set.
Privilege uint32 `json:"privilege,omitempty"`
}
type FileDeleteSpec struct {
// FileName is the name of the file to be created, modified, deleted, renamed, or appended.
FileName string `json:"file-name,omitempty"`
// DirName is the directory name to create or delete.
DirName string `json:"dir-name,omitempty"`
}
type FileRenameSpec struct {
// SourceFile is the name of the file to be renamed.
SourceFile string `json:"source-file,omitempty"`
// DestFile is the new name after renaming.
DestFile string `json:"dest-file,omitempty"`
}
type FileAppendSpec struct {
// FileName is the name of the file to be created, modified, deleted, renamed, or appended.
FileName string `json:"file-name,omitempty"`
// Data is the data for append.
Data string `json:"data,omitempty"`
// Count is the number of times to append the data.
Count int `json:"count,omitempty"`
}
type FileReplaceSpec struct {
// FileName is the name of the file to be created, modified, deleted, renamed, or appended.
FileName string `json:"file-name,omitempty"`
// OriginStr is the origin string of the file.
OriginStr string `json:"origin-string,omitempty"`
// DestStr is the destination string of the file.
DestStr string `json:"dest-string,omitempty"`
// Line is the line number of the file to be replaced.
Line int `json:"line,omitempty"`
}
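A hedged sketch of a file-replace spec (values are made up; the validator added later in this diff requires file-name plus both the origin and destination strings):

// hypothetical helper inside the v1alpha1 package, illustrative values only
func exampleFileReplace() *FileReplaceSpec {
	return &FileReplaceSpec{
		FileName:  "/etc/app/config.yaml", // made-up path
		OriginStr: "debug: false",
		DestStr:   "debug: true",
		Line:      12, // only the 12th line is replaced
	}
}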
type VMSpec struct {
// The name of the VM to be injected
VMName string `json:"vm-name,omitempty"`
}
type UserDefinedSpec struct {
// The command to be executed when attack
AttackCmd string `json:"attackCmd,omitempty"`
// The command to be executed when recover
RecoverCmd string `json:"recoverCmd,omitempty"`
}
......@@ -116,6 +116,10 @@ func (in *PhysicalMachineChaosSpec) Validate(root interface{}, path *field.Path)
validateConfigErr = validateNetworkBandwidthAction(in.NetworkBandwidth)
case PMNetworkDNSAction:
validateConfigErr = validateNetworkDNSAction(in.NetworkDNS)
case PMNetworkFloodAction:
validateConfigErr = validateNetworkFlood(in.NetworkFlood)
case PMNetworkDownAction:
validateConfigErr = validateNetworkDownAction(in.NetworkDown)
case PMProcessAction:
validateConfigErr = validateProcessAction(in.Process)
case PMJVMExceptionAction:
......@@ -130,8 +134,40 @@ func (in *PhysicalMachineChaosSpec) Validate(root interface{}, path *field.Path)
validateConfigErr = validateJVMStressAction(in.JVMStress)
case PMJVMRuleDataAction:
validateConfigErr = validateJVMRuleDataAction(in.JVMRuleData)
case PMJVMMySQLAction:
validateConfigErr = validateJVMMySQLAction(in.JVMMySQL)
case PMClockAction:
validateConfigErr = validateClockAction(in.Clock)
case PMRedisExpirationAction:
validateConfigErr = validateRedisExpirationAction(in.RedisExpiration)
case PMRedisCacheLimitAction:
validateConfigErr = validateRedisCacheLimitAction(in.RedisCacheLimit)
case PMRedisPenetrationAction:
validateConfigErr = validateRedisPenetrationAction(in.RedisPenetration)
case PMRedisSentinelStopAction:
validateConfigErr = validateRedisSentinelStopAction(in.RedisSentinelStop)
case PMRedisSentinelRestartAction:
validateConfigErr = validateRedisSentinelRestartAction(in.RedisSentinelRestart)
case PMKafkaFillAction:
validateConfigErr = validateKafkaFillAction(in.KafkaFill)
case PMKafkaFloodAction:
validateConfigErr = validateKafkaFloodAction(in.KafkaFlood)
case PMKafkaIOAction:
validateConfigErr = validateKafkaIOAction(in.KafkaIO)
case PMFileCreateAction:
validateConfigErr = validateFileCreateAction(in.FileCreate)
case PMFileModifyPrivilegeAction:
validateConfigErr = validateFileModifyPrivilegeAction(in.FileModifyPrivilege)
case PMFileDeleteAction:
validateConfigErr = validateFileDeleteAction(in.FileDelete)
case PMFileRenameAction:
validateConfigErr = validateFileRenameAction(in.FileRename)
case PMFileAppendAction:
validateConfigErr = validateFileAppendAction(in.FileAppend)
case PMFileReplaceAction:
validateConfigErr = validateFileReplaceAction(in.FileReplace)
case PMUserDefinedAction:
validateConfigErr = validateUserDefinedAction(in.UserDefined)
default:
}
......@@ -300,6 +336,37 @@ func validateNetworkDNSAction(spec *NetworkDNSSpec) error {
return nil
}
func validateNetworkFlood(spec *NetworkFloodSpec) error {
if len(spec.IPAddress) == 0 {
return errors.New("ip-address is required")
}
if len(spec.Port) == 0 {
return errors.New("port is required")
}
if len(spec.Rate) == 0 {
return errors.New("rate is required")
}
if len(spec.Duration) == 0 {
return errors.New("duration is required")
}
return nil
}
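Within the package (for example in a unit test), the new validator can be exercised directly; a hedged sketch of the error path, with made-up values:

// hypothetical snippet, not part of this change
func demoMissingFloodPort() error {
	spec := &NetworkFloodSpec{
		IPAddress: "10.0.0.5",
		Rate:      "100mbps",
		Duration:  "60",
		// Port deliberately left empty
	}
	return validateNetworkFlood(spec) // returns errors.New("port is required")
}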
func validateNetworkDownAction(spec *NetworkDownSpec) error {
if len(spec.Device) == 0 {
return errors.New("device is required")
}
if len(spec.Duration) == 0 {
return errors.New("duration is required")
}
return nil
}
func validateProcessAction(spec *ProcessSpec) error {
if len(spec.Process) == 0 {
return errors.New("process is required")
......@@ -404,6 +471,21 @@ func validateJVMRuleDataAction(spec *JVMRuleDataSpec) error {
return nil
}
func validateJVMMySQLAction(spec *PMJVMMySQLSpec) error {
if err := CheckPid(spec.Pid); err != nil {
return err
}
if len(spec.MySQLConnectorVersion) == 0 {
return errors.New("MySQL connector version not provided")
}
if len(spec.ThrowException) == 0 && spec.LatencyDuration == 0 {
return errors.New("must set one of exception or latency")
}
return nil
}
func validateClockAction(spec *ClockSpec) error {
if err := CheckPid(spec.Pid); err != nil {
return err
......@@ -487,3 +569,185 @@ func (in *NetworkBandwidthSpec) Validate(root interface{}, path *field.Path) fie
return allErrs
}
var ValidOptions = map[string]bool{"XX": true, "NX": true, "GT": true, "LT": true}
func validateRedisCommonAction(spec *RedisCommonSpec) error {
if len(spec.Addr) == 0 {
return errors.New("addr of redis server is required")
}
return nil
}
func validateRedisExpirationAction(spec *RedisExpirationSpec) error {
if err := validateRedisCommonAction(&spec.RedisCommonSpec); err != nil {
return err
}
// reject options outside the allowed set; an empty option means no extra flag
if _, ok := ValidOptions[spec.Option]; spec.Option != "" && !ok {
return errors.New("option invalid")
}
return nil
}
func validateRedisCacheLimitAction(spec *RedisCacheLimitSpec) error {
if err := validateRedisCommonAction(&spec.RedisCommonSpec); err != nil {
return err
}
if spec.Size != "0" && spec.Percent != "" {
return errors.New("only one of size and percent can be set")
}
return nil
}
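A hedged, illustrative snippet of the mutual-exclusion rule above (made-up values):

// hypothetical snippet, not part of this change
func demoCacheLimitConflict() error {
	spec := &RedisCacheLimitSpec{
		RedisCommonSpec: RedisCommonSpec{Addr: "127.0.0.1:6379"},
		Size:            "256mb",
		Percent:         "50",
	}
	// both size and percent are set, so this returns
	// "only one of size and percent can be set"
	return validateRedisCacheLimitAction(spec)
}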
func validateRedisPenetrationAction(spec *RedisPenetrationSpec) error {
if err := validateRedisCommonAction(&spec.RedisCommonSpec); err != nil {
return err
}
if spec.RequestNum == 0 {
return errors.New("requestNum is required")
}
return nil
}
func validateRedisSentinelStopAction(spec *RedisSentinelStopSpec) error {
return validateRedisCommonAction(&spec.RedisCommonSpec)
}
func validateRedisSentinelRestartAction(spec *RedisSentinelRestartSpec) error {
if err := validateRedisCommonAction(&spec.RedisCommonSpec); err != nil {
return err
}
if len(spec.Conf) == 0 {
return errors.New("conf is required to restart the sentinel")
}
return nil
}
func validateKafkaCommonAction(spec *KafkaCommonSpec) error {
if spec.Host == "" {
return errors.New("host is required")
}
if spec.Port == 0 {
return errors.New("port is required")
}
return nil
}
func validateKafkaFillAction(spec *KafkaFillSpec) error {
if err := validateKafkaCommonAction(&spec.KafkaCommonSpec); err != nil {
return err
}
if spec.MaxBytes == 0 {
return errors.New("max bytes is required")
}
if spec.ReloadCommand == "" {
return errors.New("reload command is required")
}
return nil
}
func validateKafkaFloodAction(spec *KafkaFloodSpec) error {
if err := validateKafkaCommonAction(&spec.KafkaCommonSpec); err != nil {
return err
}
if spec.Threads == 0 {
return errors.New("threads is required")
}
return nil
}
func validateKafkaIOAction(spec *KafkaIOSpec) error {
if !spec.NonReadable && !spec.NonWritable {
return errors.New("at least one of non-readable or non-writable is required")
}
return nil
}
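A hedged, illustrative snippet of the "at least one flag" rule above (made-up values):

// hypothetical snippet, not part of this change
func demoKafkaIO() error {
	spec := &KafkaIOSpec{
		Topic:       "orders",                       // made-up topic
		ConfigFile:  "/etc/kafka/server.properties", // made-up path
		NonWritable: true,
	}
	// passes: at least one of NonReadable / NonWritable is set
	return validateKafkaIOAction(spec)
}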
func validateFileCreateAction(spec *FileCreateSpec) error {
if len(spec.FileName) == 0 && len(spec.DirName) == 0 {
return errors.New("one of file-name and dir-name is required")
}
return nil
}
func validateFileModifyPrivilegeAction(spec *FileModifyPrivilegeSpec) error {
if len(spec.FileName) == 0 {
return errors.New("file name is required")
}
if spec.Privilege == 0 {
return errors.New("file privilege is required")
}
return nil
}
func validateFileDeleteAction(spec *FileDeleteSpec) error {
if len(spec.FileName) == 0 && len(spec.DirName) == 0 {
return errors.New("one of file-name and dir-name is required")
}
return nil
}
func validateFileRenameAction(spec *FileRenameSpec) error {
if len(spec.SourceFile) == 0 || len(spec.DestFile) == 0 {
return errors.New("both source file and destination file are required")
}
return nil
}
func validateFileAppendAction(spec *FileAppendSpec) error {
if len(spec.FileName) == 0 {
return errors.New("file-name is required")
}
if len(spec.Data) == 0 {
return errors.New("append data is required")
}
return nil
}
func validateFileReplaceAction(spec *FileReplaceSpec) error {
if len(spec.FileName) == 0 {
return errors.New("file-name is required")
}
if len(spec.OriginStr) == 0 || len(spec.DestStr) == 0 {
return errors.New("both origin and destination string are required")
}
return nil
}
func validateUserDefinedAction(spec *UserDefinedSpec) error {
if len(spec.AttackCmd) == 0 {
return errors.New("attack command not provided")
}
if len(spec.RecoverCmd) == 0 {
return errors.New("recover command not provided")
}
return nil
}
......@@ -33,7 +33,7 @@ type PodChaos struct {
// +optional
// Most recently observed status of the chaos experiment about pods
Status PodChaosStatus `json:"status"`
Status PodChaosStatus `json:"status,omitempty"`
}
var _ InnerObjectWithSelector = (*PodChaos)(nil)
......@@ -75,7 +75,7 @@ type PodChaosSpec struct {
// Value must be non-negative integer. The default value is zero that indicates delete immediately.
// +optional
// +kubebuilder:validation:Minimum=0
GracePeriod int64 `json:"gracePeriod"`
GracePeriod int64 `json:"gracePeriod,omitempty"`
}
// PodChaosStatus represents the current status of the chaos experiment about pods.
......
......@@ -37,7 +37,7 @@ type PodNetworkChaos struct {
// +optional
// Most recently observed status of the chaos experiment about pods
Status PodNetworkChaosStatus `json:"status"`
Status PodNetworkChaosStatus `json:"status,omitempty"`
}
// PodNetworkChaosSpec defines the desired state of PodNetworkChaos
......
......@@ -33,7 +33,7 @@ type Schedule struct {
Spec ScheduleSpec `json:"spec"`
// +optional
Status ScheduleStatus `json:"status"`
Status ScheduleStatus `json:"status,omitempty"`
}
type ConcurrencyPolicy string
......
......@@ -36,7 +36,7 @@ type StatusCheck struct {
// +optional
// Most recently observed status of status check
Status StatusCheckStatus `json:"status"`
Status StatusCheckStatus `json:"status,omitempty"`
}
type StatusCheckMode string
......
......@@ -37,7 +37,7 @@ type StressChaos struct {
// +optional
// Most recently observed status of the time chaos experiment
Status StressChaosStatus `json:"status"`
Status StressChaosStatus `json:"status,omitempty"`
}
var _ InnerObjectWithCustomStatus = (*StressChaos)(nil)
......@@ -80,16 +80,16 @@ type StressChaosStatus struct {
type StressInstance struct {
// UID is the stress-ng identifier
// +optional
UID string `json:"uid"`
UID string `json:"uid,omitempty"`
// MemoryUID is the memStress identifier
// +optional
MemoryUID string `json:"memoryUid"`
MemoryUID string `json:"memoryUid,omitempty"`
// StartTime specifies when the stress-ng starts
// +optional
StartTime *metav1.Time `json:"startTime"`
StartTime *metav1.Time `json:"startTime,omitempty"`
// MemoryStartTime specifies when the memStress starts
// +optional
MemoryStartTime *metav1.Time `json:"memoryStartTime"`
MemoryStartTime *metav1.Time `json:"memoryStartTime,omitempty"`
}
// Stressors defines plenty of stressors supported to stress system components out.
......
......@@ -33,7 +33,7 @@ type TimeChaos struct {
// +optional
// Most recently observed status of the time chaos experiment
Status TimeChaosStatus `json:"status"`
Status TimeChaosStatus `json:"status,omitempty"`
}
var _ InnerObjectWithSelector = (*TimeChaos)(nil)
......
......@@ -41,7 +41,7 @@ type WorkflowNode struct {
// +optional
// Most recently observed status of the workflow node
Status WorkflowNodeStatus `json:"status"`
Status WorkflowNodeStatus `json:"status,omitempty"`
}
type WorkflowNodeSpec struct {
......
......@@ -1097,6 +1097,16 @@ func (in *ExpInfo) DeepCopyInto(out *ExpInfo) {
*out = new(NetworkBandwidthSpec)
(*in).DeepCopyInto(*out)
}
if in.NetworkFlood != nil {
in, out := &in.NetworkFlood, &out.NetworkFlood
*out = new(NetworkFloodSpec)
**out = **in
}
if in.NetworkDown != nil {
in, out := &in.NetworkDown, &out.NetworkDown
*out = new(NetworkDownSpec)
**out = **in
}
if in.Process != nil {
in, out := &in.Process, &out.Process
*out = new(ProcessSpec)
......@@ -1132,11 +1142,116 @@ func (in *ExpInfo) DeepCopyInto(out *ExpInfo) {
*out = new(JVMRuleDataSpec)
**out = **in
}
if in.JVMMySQL != nil {
in, out := &in.JVMMySQL, &out.JVMMySQL
*out = new(PMJVMMySQLSpec)
**out = **in
}
if in.Clock != nil {
in, out := &in.Clock, &out.Clock
*out = new(ClockSpec)
**out = **in
}
if in.RedisExpiration != nil {
in, out := &in.RedisExpiration, &out.RedisExpiration
*out = new(RedisExpirationSpec)
**out = **in
}
if in.RedisPenetration != nil {
in, out := &in.RedisPenetration, &out.RedisPenetration
*out = new(RedisPenetrationSpec)
**out = **in
}
if in.RedisCacheLimit != nil {
in, out := &in.RedisCacheLimit, &out.RedisCacheLimit
*out = new(RedisCacheLimitSpec)
**out = **in
}
if in.RedisSentinelRestart != nil {
in, out := &in.RedisSentinelRestart, &out.RedisSentinelRestart
*out = new(RedisSentinelRestartSpec)
**out = **in
}
if in.RedisSentinelStop != nil {
in, out := &in.RedisSentinelStop, &out.RedisSentinelStop
*out = new(RedisSentinelStopSpec)
**out = **in
}
if in.KafkaFill != nil {
in, out := &in.KafkaFill, &out.KafkaFill
*out = new(KafkaFillSpec)
**out = **in
}
if in.KafkaFlood != nil {
in, out := &in.KafkaFlood, &out.KafkaFlood
*out = new(KafkaFloodSpec)
**out = **in
}
if in.KafkaIO != nil {
in, out := &in.KafkaIO, &out.KafkaIO
*out = new(KafkaIOSpec)
**out = **in
}
if in.HTTPAbort != nil {
in, out := &in.HTTPAbort, &out.HTTPAbort
*out = new(HTTPAbortSpec)
(*in).DeepCopyInto(*out)
}
if in.HTTPDelay != nil {
in, out := &in.HTTPDelay, &out.HTTPDelay
*out = new(HTTPDelaySpec)
(*in).DeepCopyInto(*out)
}
if in.HTTPConfig != nil {
in, out := &in.HTTPConfig, &out.HTTPConfig
*out = new(HTTPConfigSpec)
**out = **in
}
if in.HTTPRequest != nil {
in, out := &in.HTTPRequest, &out.HTTPRequest
*out = new(HTTPRequestSpec)
**out = **in
}
if in.FileCreate != nil {
in, out := &in.FileCreate, &out.FileCreate
*out = new(FileCreateSpec)
**out = **in
}
if in.FileModifyPrivilege != nil {
in, out := &in.FileModifyPrivilege, &out.FileModifyPrivilege
*out = new(FileModifyPrivilegeSpec)
**out = **in
}
if in.FileDelete != nil {
in, out := &in.FileDelete, &out.FileDelete
*out = new(FileDeleteSpec)
**out = **in
}
if in.FileRename != nil {
in, out := &in.FileRename, &out.FileRename
*out = new(FileRenameSpec)
**out = **in
}
if in.FileAppend != nil {
in, out := &in.FileAppend, &out.FileAppend
*out = new(FileAppendSpec)
**out = **in
}
if in.FileReplace != nil {
in, out := &in.FileReplace, &out.FileReplace
*out = new(FileReplaceSpec)
**out = **in
}
if in.VM != nil {
in, out := &in.VM, &out.VM
*out = new(VMSpec)
**out = **in
}
if in.UserDefined != nil {
in, out := &in.UserDefined, &out.UserDefined
*out = new(UserDefinedSpec)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExpInfo.
......@@ -1200,6 +1315,96 @@ func (in *FailKernRequest) DeepCopy() *FailKernRequest {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *FileAppendSpec) DeepCopyInto(out *FileAppendSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FileAppendSpec.
func (in *FileAppendSpec) DeepCopy() *FileAppendSpec {
if in == nil {
return nil
}
out := new(FileAppendSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *FileCreateSpec) DeepCopyInto(out *FileCreateSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FileCreateSpec.
func (in *FileCreateSpec) DeepCopy() *FileCreateSpec {
if in == nil {
return nil
}
out := new(FileCreateSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *FileDeleteSpec) DeepCopyInto(out *FileDeleteSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FileDeleteSpec.
func (in *FileDeleteSpec) DeepCopy() *FileDeleteSpec {
if in == nil {
return nil
}
out := new(FileDeleteSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *FileModifyPrivilegeSpec) DeepCopyInto(out *FileModifyPrivilegeSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FileModifyPrivilegeSpec.
func (in *FileModifyPrivilegeSpec) DeepCopy() *FileModifyPrivilegeSpec {
if in == nil {
return nil
}
out := new(FileModifyPrivilegeSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *FileRenameSpec) DeepCopyInto(out *FileRenameSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FileRenameSpec.
func (in *FileRenameSpec) DeepCopy() *FileRenameSpec {
if in == nil {
return nil
}
out := new(FileRenameSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *FileReplaceSpec) DeepCopyInto(out *FileReplaceSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FileReplaceSpec.
func (in *FileReplaceSpec) DeepCopy() *FileReplaceSpec {
if in == nil {
return nil
}
out := new(FileReplaceSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Filter) DeepCopyInto(out *Filter) {
*out = *in
......@@ -1428,6 +1633,22 @@ func (in *GenericSelectorSpec) DeepCopy() *GenericSelectorSpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HTTPAbortSpec) DeepCopyInto(out *HTTPAbortSpec) {
*out = *in
in.HTTPCommonSpec.DeepCopyInto(&out.HTTPCommonSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HTTPAbortSpec.
func (in *HTTPAbortSpec) DeepCopy() *HTTPAbortSpec {
if in == nil {
return nil
}
out := new(HTTPAbortSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HTTPChaos) DeepCopyInto(out *HTTPChaos) {
*out = *in
......@@ -1561,6 +1782,41 @@ func (in *HTTPChaosStatus) DeepCopy() *HTTPChaosStatus {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HTTPCommonSpec) DeepCopyInto(out *HTTPCommonSpec) {
*out = *in
if in.ProxyPorts != nil {
in, out := &in.ProxyPorts, &out.ProxyPorts
*out = make([]uint, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HTTPCommonSpec.
func (in *HTTPCommonSpec) DeepCopy() *HTTPCommonSpec {
if in == nil {
return nil
}
out := new(HTTPCommonSpec)
in.DeepCopyInto(out)
return out
}
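The generated copier above allocates a fresh `ProxyPorts` slice instead of using a plain struct assignment; a hedged illustration of why (hypothetical snippet, not generated code):

// hypothetical snippet inside the v1alpha1 package
func demoDeepVsShallowCopy() {
	a := HTTPCommonSpec{ProxyPorts: []uint{8080, 8443}}

	shallow := a          // plain assignment: shares the slice's backing array with a
	deep := HTTPCommonSpec{}
	a.DeepCopyInto(&deep) // fresh backing array, independent of a

	deep.ProxyPorts[0] = 9090    // a.ProxyPorts[0] is still 8080
	shallow.ProxyPorts[1] = 9443 // a.ProxyPorts[1] becomes 9443 as well
}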
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HTTPConfigSpec) DeepCopyInto(out *HTTPConfigSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HTTPConfigSpec.
func (in *HTTPConfigSpec) DeepCopy() *HTTPConfigSpec {
if in == nil {
return nil
}
out := new(HTTPConfigSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HTTPCriteria) DeepCopyInto(out *HTTPCriteria) {
*out = *in
......@@ -1576,6 +1832,37 @@ func (in *HTTPCriteria) DeepCopy() *HTTPCriteria {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HTTPDelaySpec) DeepCopyInto(out *HTTPDelaySpec) {
*out = *in
in.HTTPCommonSpec.DeepCopyInto(&out.HTTPCommonSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HTTPDelaySpec.
func (in *HTTPDelaySpec) DeepCopy() *HTTPDelaySpec {
if in == nil {
return nil
}
out := new(HTTPDelaySpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HTTPRequestSpec) DeepCopyInto(out *HTTPRequestSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HTTPRequestSpec.
func (in *HTTPRequestSpec) DeepCopy() *HTTPRequestSpec {
if in == nil {
return nil
}
out := new(HTTPRequestSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HTTPStatusCheck) DeepCopyInto(out *HTTPStatusCheck) {
*out = *in
......@@ -2046,6 +2333,68 @@ func (in *JVMStressSpec) DeepCopy() *JVMStressSpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KafkaCommonSpec) DeepCopyInto(out *KafkaCommonSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KafkaCommonSpec.
func (in *KafkaCommonSpec) DeepCopy() *KafkaCommonSpec {
if in == nil {
return nil
}
out := new(KafkaCommonSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KafkaFillSpec) DeepCopyInto(out *KafkaFillSpec) {
*out = *in
out.KafkaCommonSpec = in.KafkaCommonSpec
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KafkaFillSpec.
func (in *KafkaFillSpec) DeepCopy() *KafkaFillSpec {
if in == nil {
return nil
}
out := new(KafkaFillSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KafkaFloodSpec) DeepCopyInto(out *KafkaFloodSpec) {
*out = *in
out.KafkaCommonSpec = in.KafkaCommonSpec
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KafkaFloodSpec.
func (in *KafkaFloodSpec) DeepCopy() *KafkaFloodSpec {
if in == nil {
return nil
}
out := new(KafkaFloodSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KafkaIOSpec) DeepCopyInto(out *KafkaIOSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KafkaIOSpec.
func (in *KafkaIOSpec) DeepCopy() *KafkaIOSpec {
if in == nil {
return nil
}
out := new(KafkaIOSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KernelChaos) DeepCopyInto(out *KernelChaos) {
*out = *in
......@@ -2416,6 +2765,21 @@ func (in *NetworkDelaySpec) DeepCopy() *NetworkDelaySpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NetworkDownSpec) DeepCopyInto(out *NetworkDownSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NetworkDownSpec.
func (in *NetworkDownSpec) DeepCopy() *NetworkDownSpec {
if in == nil {
return nil
}
out := new(NetworkDownSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NetworkDuplicateSpec) DeepCopyInto(out *NetworkDuplicateSpec) {
*out = *in
......@@ -2432,6 +2796,21 @@ func (in *NetworkDuplicateSpec) DeepCopy() *NetworkDuplicateSpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NetworkFloodSpec) DeepCopyInto(out *NetworkFloodSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NetworkFloodSpec.
func (in *NetworkFloodSpec) DeepCopy() *NetworkFloodSpec {
if in == nil {
return nil
}
out := new(NetworkFloodSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NetworkLossSpec) DeepCopyInto(out *NetworkLossSpec) {
*out = *in
......@@ -2463,6 +2842,23 @@ func (in *NetworkPartitionSpec) DeepCopy() *NetworkPartitionSpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PMJVMMySQLSpec) DeepCopyInto(out *PMJVMMySQLSpec) {
*out = *in
out.JVMCommonSpec = in.JVMCommonSpec
out.JVMMySQLSpec = in.JVMMySQLSpec
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PMJVMMySQLSpec.
func (in *PMJVMMySQLSpec) DeepCopy() *PMJVMMySQLSpec {
if in == nil {
return nil
}
out := new(PMJVMMySQLSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PhysicalMachine) DeepCopyInto(out *PhysicalMachine) {
*out = *in
......@@ -3514,6 +3910,101 @@ func (in *RecordEvent) DeepCopy() *RecordEvent {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RedisCacheLimitSpec) DeepCopyInto(out *RedisCacheLimitSpec) {
*out = *in
out.RedisCommonSpec = in.RedisCommonSpec
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RedisCacheLimitSpec.
func (in *RedisCacheLimitSpec) DeepCopy() *RedisCacheLimitSpec {
if in == nil {
return nil
}
out := new(RedisCacheLimitSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RedisCommonSpec) DeepCopyInto(out *RedisCommonSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RedisCommonSpec.
func (in *RedisCommonSpec) DeepCopy() *RedisCommonSpec {
if in == nil {
return nil
}
out := new(RedisCommonSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RedisExpirationSpec) DeepCopyInto(out *RedisExpirationSpec) {
*out = *in
out.RedisCommonSpec = in.RedisCommonSpec
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RedisExpirationSpec.
func (in *RedisExpirationSpec) DeepCopy() *RedisExpirationSpec {
if in == nil {
return nil
}
out := new(RedisExpirationSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RedisPenetrationSpec) DeepCopyInto(out *RedisPenetrationSpec) {
*out = *in
out.RedisCommonSpec = in.RedisCommonSpec
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RedisPenetrationSpec.
func (in *RedisPenetrationSpec) DeepCopy() *RedisPenetrationSpec {
if in == nil {
return nil
}
out := new(RedisPenetrationSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RedisSentinelRestartSpec) DeepCopyInto(out *RedisSentinelRestartSpec) {
*out = *in
out.RedisCommonSpec = in.RedisCommonSpec
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RedisSentinelRestartSpec.
func (in *RedisSentinelRestartSpec) DeepCopy() *RedisSentinelRestartSpec {
if in == nil {
return nil
}
out := new(RedisSentinelRestartSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RedisSentinelStopSpec) DeepCopyInto(out *RedisSentinelStopSpec) {
*out = *in
out.RedisCommonSpec = in.RedisCommonSpec
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RedisSentinelStopSpec.
func (in *RedisSentinelStopSpec) DeepCopy() *RedisSentinelStopSpec {
if in == nil {
return nil
}
out := new(RedisSentinelStopSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RemoteCluster) DeepCopyInto(out *RemoteCluster) {
*out = *in
......@@ -4404,6 +4895,36 @@ func (in *Timespec) DeepCopy() *Timespec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *UserDefinedSpec) DeepCopyInto(out *UserDefinedSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new UserDefinedSpec.
func (in *UserDefinedSpec) DeepCopy() *UserDefinedSpec {
if in == nil {
return nil
}
out := new(UserDefinedSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *VMSpec) DeepCopyInto(out *VMSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VMSpec.
func (in *VMSpec) DeepCopy() *VMSpec {
if in == nil {
return nil
}
out := new(VMSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Workflow) DeepCopyInto(out *Workflow) {
*out = *in
......
......@@ -59,6 +59,8 @@ spec:
- network-partition
- network-dns
- network-bandwidth
- network-flood
- network-down
- process
- jvm-exception
- jvm-gc
......@@ -66,7 +68,24 @@ spec:
- jvm-return
- jvm-stress
- jvm-rule-data
- jvm-mysql
- clock
- redis-expiration
- redis-penetration
- redis-cacheLimit
- redis-restart
- redis-stop
- kafka-fill
- kafka-flood
- kafka-io
- file-create
- file-modify
- file-delete
- file-rename
- file-append
- file-replace
- vm
- user_defined
type: string
address:
description: 'DEPRECATED: Use Selector instead. Only one of Address
......@@ -145,6 +164,156 @@ spec:
duration:
description: Duration represents the duration of the chaos action
type: string
file-append:
properties:
count:
description: Count is the number of times to append the data.
type: integer
data:
description: Data is the data for append.
type: string
file-name:
description: FileName is the name of the file to be created, modified,
deleted, renamed, or appended.
type: string
type: object
file-create:
properties:
dir-name:
description: DirName is the directory name to create or delete.
type: string
file-name:
description: FileName is the name of the file to be created, modified,
deleted, renamed, or appended.
type: string
type: object
file-delete:
properties:
dir-name:
description: DirName is the directory name to create or delete.
type: string
file-name:
description: FileName is the name of the file to be created, modified,
deleted, renamed, or appended.
type: string
type: object
file-modify:
properties:
file-name:
description: FileName is the name of the file to be created, modified,
deleted, renamed, or appended.
type: string
privilege:
description: Privilege is the file privilege to be set.
format: int32
type: integer
type: object
file-rename:
properties:
dest-file:
description: DestFile is the new name after renaming.
type: string
source-file:
description: SourceFile is the name of the file to be renamed.
type: string
type: object
file-replace:
properties:
dest-string:
description: DestStr is the destination string of the file.
type: string
file-name:
description: FileName is the name of the file to be created, modified,
deleted, renamed, or appended.
type: string
line:
description: Line is the line number of the file to be replaced.
type: integer
origin-string:
description: OriginStr is the origin string of the file.
type: string
type: object
http-abort:
properties:
code:
description: Code is a rule to select target by http status code
in response
type: string
method:
description: HTTP method
type: string
path:
description: Match path of Uri with wildcard matches
type: string
port:
description: The TCP port that the target service listens on
format: int32
type: integer
proxy_ports:
description: Composed with one of the port of HTTP connection,
we will only attack HTTP connection with port inside proxy_ports
items:
type: integer
type: array
target:
description: 'HTTP target: Request or Response'
type: string
required:
- proxy_ports
- target
type: object
http-config:
properties:
file_path:
description: The config file path
type: string
type: object
http-delay:
properties:
code:
description: Code is a rule to select target by http status code
in response
type: string
delay:
description: Delay represents the delay of the target request/response
type: string
method:
description: HTTP method
type: string
path:
description: Match path of Uri with wildcard matches
type: string
port:
description: The TCP port that the target service listens on
format: int32
type: integer
proxy_ports:
description: Composed with one of the port of HTTP connection,
we will only attack HTTP connection with port inside proxy_ports
items:
type: integer
type: array
target:
description: 'HTTP target: Request or Response'
type: string
required:
- delay
- proxy_ports
- target
type: object
http-request:
description: used for HTTP request, now only supports GET
properties:
count:
description: The number of requests to send
type: integer
enable-conn-pool:
description: Enable connection pool
type: boolean
url:
description: Request to send
type: string
type: object
jvm-exception:
properties:
class:
......@@ -193,6 +362,41 @@ spec:
format: int32
type: integer
type: object
jvm-mysql:
properties:
database:
description: the match database default value is "", means match
all database
type: string
exception:
description: The exception to throw for action `exception`,
or the exception message to throw for action `mysql`
type: string
latency:
description: The latency duration for action 'latency' or the
latency duration in action `mysql`
type: integer
mysqlConnectorVersion:
description: the version of mysql-connector-java, only support
5.X.X(set to "5") and 8.X.X(set to "8") now
type: string
pid:
description: the pid of Java process which needs to attach
type: integer
port:
description: the port of agent server, default 9277
format: int32
type: integer
sqlType:
description: the match sql type default value is "", means match
all SQL type. The value can be 'select', 'insert', 'update',
'delete', 'replace'.
type: string
table:
description: the match table default value is "", means match
all table
type: string
type: object
jvm-return:
properties:
class:
......@@ -244,6 +448,73 @@ spec:
format: int32
type: integer
type: object
kafka-fill:
properties:
host:
description: The host of kafka server
type: string
maxBytes:
description: The max bytes to fill
format: int64
type: integer
messageSize:
description: The size of each message
type: integer
password:
description: The password of kafka client
type: string
port:
description: The port of kafka server
type: integer
reloadCommand:
description: The command to reload kafka config
type: string
topic:
description: The topic to attack
type: string
username:
description: The username of kafka client
type: string
type: object
kafka-flood:
properties:
host:
description: The host of kafka server
type: string
messageSize:
description: The size of each message
type: integer
password:
description: The password of kafka client
type: string
port:
description: The port of kafka server
type: integer
threads:
description: The number of worker threads
type: integer
topic:
description: The topic to attack
type: string
username:
description: The username of kafka client
type: string
type: object
kafka-io:
properties:
configFile:
description: The path of server config
type: string
nonReadable:
description: Make kafka cluster non-readable
type: boolean
nonWritable:
description: Make kafka cluster non-writable
type: boolean
topic:
description: The topic to attack
type: string
type: object
mode:
description: 'Mode defines the mode to run chaos action. Supported
mode: one / all / fixed / fixed-percent / random-max-percent'
......@@ -369,6 +640,16 @@ spec:
value
type: string
type: object
network-down:
properties:
device:
description: The network interface to impact
type: string
duration:
description: 'NIC down time, time units: ns, us (or µs), ms, s,
m, h.'
type: string
type: object
network-duplicate:
properties:
correlation:
......@@ -403,6 +684,29 @@ spec:
-p udp
type: string
type: object
network-flood:
properties:
duration:
description: The number of seconds to run the iperf test
type: string
ip-address:
description: Generate traffic to this IP address
type: string
parallel:
description: The number of iperf parallel client threads to run
format: int32
type: integer
port:
description: Generate traffic to this port on the IP address
type: string
rate:
description: The speed of network traffic, allows bps, kbps, mbps,
gbps, tbps unit. bps means bytes per second
type: string
required:
- duration
- rate
type: object
network-loss:
properties:
correlation:
......@@ -475,6 +779,88 @@ spec:
description: the signal number to send
type: integer
type: object
redis-cacheLimit:
properties:
addr:
description: The address of Redis server
type: string
cacheSize:
description: The size of `maxmemory`
type: string
password:
description: The password of Redis server
type: string
percent:
description: Specifies maxmemory as a percentage of the original
value
type: string
type: object
redis-expiration:
properties:
addr:
description: The address of Redis server
type: string
expiration:
description: The expiration of the keys
type: string
key:
description: The keys to be expired
type: string
option:
description: Additional options for `expiration`
type: string
password:
description: The password of Redis server
type: string
type: object
redis-penetration:
properties:
addr:
description: The address of Redis server
type: string
password:
description: The password of Redis server
type: string
requestNum:
description: The number of requests to be sent
type: integer
type: object
redis-restart:
properties:
addr:
description: The address of Redis server
type: string
conf:
description: The path of Sentinel conf
type: string
flushConfig:
description: The control flag determines whether to flush config
type: boolean
password:
description: The password of Redis server
type: string
redisPath:
description: The path of `redis-server` command-line tool
type: boolean
type: object
redis-stop:
properties:
addr:
description: The address of Redis server
type: string
conf:
description: The path of Sentinel conf
type: string
flushConfig:
description: The control flag determines whether to flush config
type: boolean
password:
description: The password of Redis server
type: string
redisPath:
description: The path of `redis-server` command-line tool
type: boolean
type: object
selector:
description: Selector is used to select physical machines that are
used to inject chaos action.
......@@ -578,6 +964,15 @@ spec:
uid:
description: the experiment ID
type: string
user_defined:
properties:
attackCmd:
description: The command to be executed when attack
type: string
recoverCmd:
description: The command to be executed when recover
type: string
type: object
value:
description: Value is required when the mode is set to `FixedMode`
/ `FixedPercentMode` / `RandomMaxPercentMode`. If `FixedMode`, provide
......@@ -587,6 +982,12 @@ spec:
a number from 0-100 to specify the max percent of pods to do chaos
action
type: string
vm:
properties:
vm-name:
description: The name of the VM to be injected
type: string
type: object
required:
- action
- mode
......
......@@ -73,7 +73,7 @@ func (impl *Impl) Apply(ctx context.Context, index int, records []*v1alpha1.Reco
}
// for example, physicalMachinechaos.Spec.Action is 'network-delay', action is 'network', subAction is 'delay'
// notice: 'process' and 'clock' action has no subAction, set subAction to ""
// notice: 'process', 'vm', 'clock' and 'user_defined' actions have no subAction, set subAction to ""
actions := strings.SplitN(string(physicalMachineChaos.Spec.Action), "-", 2)
if len(actions) == 1 {
actions = append(actions, "")
......
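For reference, a hedged sketch of how the SplitN call above decomposes the new action names into chaos-daemon's action/subAction pair (hypothetical helper, assumes `strings` is imported):

func splitAction(a string) (action, subAction string) {
	parts := strings.SplitN(a, "-", 2)
	if len(parts) == 1 {
		parts = append(parts, "")
	}
	return parts[0], parts[1]
}

// splitAction("network-flood") == ("network", "flood")
// splitAction("jvm-mysql")     == ("jvm", "mysql")
// splitAction("user_defined")  == ("user_defined", "")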