Merge remote-tracking branch 'upstream/master'

Daniel Yu 2017-04-25 10:19:57 +08:00
commit ab3d1bef68
83 changed files with 5042 additions and 43 deletions

View File

@ -3,6 +3,7 @@
version: "v0.6.2-rancher1"
description: "Rancher External DNS service powered by DigitalOcean"
minimum_rancher_version: v1.2.0-pre4-rc1
maximum_rancher_version: v1.4.99
questions:
- variable: "DO_PAT"
label: "DigitalOcean Personal Access Token"

View File

@ -0,0 +1,48 @@
## DigitalOcean DNS
Rancher External DNS service powered by DigitalOcean
#### Changelog
Initial version
#### Usage
##### DigitalOcean DNS record TTL
The DigitalOcean API currently does not support per-record TTL setting. You should configure the global TTL setting for the domain manually and set it to a low value (e.g. 60).
##### Limitation when running the service on multiple Rancher servers
When running multiple instances of the External DNS service configured to use the same domain name, only one of them can run in the "Default" environment of a Rancher server instance.
##### Supported host labels
`io.rancher.host.external_dns_ip`
Override the IP address used in DNS records for containers running on the host. Defaults to the IP address the host is registered with in Rancher.
`io.rancher.host.external_dns`
Accepts 'true' (default) or 'false'
When this is set to 'false', no DNS records will ever be created for containers running on this host.
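For example, host labels can be supplied when registering the host — a sketch assuming Rancher 1.x's `CATTLE_HOST_LABELS` registration mechanism (agent version, server URL, token and IP are placeholders):
```
# hypothetical registration command; replace server URL, token and IP with your own
sudo docker run -e CATTLE_HOST_LABELS='io.rancher.host.external_dns_ip=203.0.113.10' \
  --privileged -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:v1.2.2 http://<rancher-server>:8080/v1/scripts/<registration-token>
```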
##### Supported service labels
`io.rancher.service.external_dns`
Accepts 'always', 'never' or 'auto' (default)
- `always`: Always create DNS records for this service
- `never`: Never create DNS records for this service
- `auto`: Create DNS records for this service if it exposes ports on the host
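As an illustration, a service can opt out of DNS record creation in its compose file (a minimal sketch; service name and image are placeholders):
```
web:
  image: nginx:stable
  labels:
    io.rancher.service.external_dns: never
```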
##### Custom DNS name template
By default DNS entries are named `<service>.<stack>.<environment>.<domain>`.
You can specify a custom name template used to construct the subdomain part (left of the domain/zone name) of the DNS records. The following placeholders are supported:
* `%{{service_name}}`
* `%{{stack_name}}`
* `%{{environment_name}}`
**Example:**
`%{{stack_name}}-%{{service_name}}.statictext`
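With that template, a service named `web` in a stack named `app` under the (hypothetical) domain `example.com` would yield the record `app-web.statictext.example.com`.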
Make sure the static text and separators contain only characters that your provider allows in DNS names.

View File

@ -0,0 +1,13 @@
digitalocean:
image: rancher/external-dns:v0.6.3
command: -provider=digitalocean
expose:
- 1000
environment:
DO_PAT: ${DO_PAT}
ROOT_DOMAIN: ${ROOT_DOMAIN}
NAME_TEMPLATE: ${NAME_TEMPLATE}
TTL: 300
labels:
io.rancher.container.create_agent: "true"
io.rancher.container.agent.role: "external-dns"

View File

@ -0,0 +1,34 @@
.catalog:
name: "DigitalOcean DNS"
version: "v0.6.3"
description: "Rancher External DNS service powered by DigitalOcean"
minimum_rancher_version: v1.5.0
questions:
- variable: "DO_PAT"
label: "DigitalOcean Personal Access Token"
description: "Enter your personal access token"
type: "string"
required: true
- variable: "ROOT_DOMAIN"
label: "Domain Name"
description: "The domain name managed by DigitalOcean."
type: "string"
required: true
- variable: "NAME_TEMPLATE"
label: "DNS Name Template"
description: |
Name template used to construct the subdomain part (left of the domain) of the DNS record names.
Supported placeholders: %{{service_name}}, %{{stack_name}}, %{{environment_name}}.
By default DNS entries will be named '<service>.<stack>.<environment>.<domain>'.
type: "string"
default: "%{{service_name}}.%{{stack_name}}.%{{environment_name}}"
required: false
digitalocean:
health_check:
port: 1000
interval: 5000
unhealthy_threshold: 3
request_line: GET / HTTP/1.0
healthy_threshold: 2
response_timeout: 2000

View File

@ -1,7 +1,7 @@
name: DigitalOcean DNS
description: |
Rancher External DNS service powered by DigitalOcean
version: v0.6.2-rancher1
version: v0.6.3
category: External DNS
labels:
io.rancher.orchestration.supported: 'cattle,mesos,swarm,kubernetes'

View File

@ -0,0 +1,46 @@
**_Portainer_** is a lightweight management UI which allows you to **easily** manage your Docker host or Swarm cluster.
**_Portainer_** is meant to be as **simple** to deploy as it is to use. It consists of a single container that can run on any Docker engine (Docker for Linux and Docker for Windows are supported).
**_Portainer_** allows you to manage your Docker containers, images, volumes, networks and more! It is compatible with the *standalone Docker* engine and with *Docker Swarm*.
## Getting started
Once you have deployed the stack, you can access the Portainer UI at `http://<RANCHER SERVER>/r/projects/<PROJECT ID>/portainer/`.
For example:
http://rancher-server:8080/r/projects/1a5/portainer/
Note: the trailing / is important in the URL.
## Demo
<img src="http://portainer.io/images/screenshots/portainer.gif" width="77%"/>
You can try out the public demo instance: http://demo.portainer.io/ (login with the username **demo** and the password **tryportainer**).
Please note that the public demo cluster is **reset every 15min**.
## Getting help
* Documentation: https://portainer.readthedocs.io
* Issues: https://github.com/portainer/portainer/issues
* FAQ: https://portainer.readthedocs.io/en/latest/faq.html
* Gitter (chat): https://gitter.im/portainer/Lobby
* Slack: http://portainer.io/slack/
## Reporting bugs and contributing
* Want to report a bug or request a feature? Please open [an issue](https://github.com/portainer/portainer/issues/new).
* Want to help us build **_portainer_**? Follow our [contribution guidelines](https://portainer.readthedocs.io/en/latest/contribute.html) to build it locally and make a pull request. We need all the help we can get!
## Limitations
**_Portainer_** has full support for the following Docker versions:
* Docker 1.10 to Docker 1.12 (including `swarm-mode`)
* Docker Swarm >= 1.2.3
Partial support for the following Docker versions (some features may not be available):
* Docker 1.9

View File

@ -0,0 +1,15 @@
portainer:
labels:
io.rancher.sidekicks: ui
io.rancher.container.create_agent: true
io.rancher.container.agent.role: environment
image: rancher/portainer-agent:v0.1.0
volumes:
- /config
ui:
image: portainer/portainer:pr572
command: --no-auth --external-endpoints=/config/config.json --sync-interval=5s -p :80
volumes_from:
- portainer
net: container:portainer

View File

@ -0,0 +1,5 @@
.catalog:
name: "Portainer"
version: "1.11.4"
description: Open-source lightweight management UI for a Docker host or Swarm cluster
minimum_rancher_version: v1.5.0-rc1

Binary file not shown (new image, 23 KiB).

View File

@ -0,0 +1,5 @@
name: portainer
description: |
Portainer is an open-source lightweight management UI which allows you to easily manage your Docker host or Swarm cluster
version: 1.11.4
category: Management

View File

@ -0,0 +1 @@
152fd64fb4936454c8eb95fa57450753

View File

@ -0,0 +1,3 @@
.catalog:
name: "interoutevdc"
version: "0.1.0"

View File

@ -0,0 +1 @@
https://myservices.interoute.com/rancher/component.js

View File

@ -0,0 +1 @@
https://github.com/Interoute/docker-machine-driver-interoutevdc/releases/download/v1.0/docker-machine-driver-interoutevdc_linux-amd64.tar.gz

View File

@ -0,0 +1,3 @@
.catalog:
name: "interoutevdc"
version: "0.2.0"

View File

@ -0,0 +1 @@
https://myservices.interoute.com/rancher/component.js

View File

@ -0,0 +1 @@
https://github.com/Interoute/docker-machine-driver-interoutevdc/releases/download/v1.1/docker-machine-driver-interoutevdc_linux-amd64.tar.gz

View File

@ -0,0 +1,84 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 21.0.2, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
viewBox="0 0 67.2 69" style="enable-background:new 0 0 67.2 69;" xml:space="preserve">
<style type="text/css">
.st0{fill:#FFFFFF;}
.st1{fill:#242020;}
.st2{fill:#DE654D;}
.st3{fill:#FFCC4E;}
.st4{fill:#F79434;}
.st5{fill:#272424;}
.st6{fill:#272323;}
.st7{fill:#282424;}
.st8{fill:#2B2727;}
.st9{fill:#2B2627;}
.st10{fill:#2A2627;}
.st11{fill:#2D2A2A;}
.st12{fill:#2F2A2B;}
.st13{fill:#2F2B2B;}
.st14{fill:#2A2626;}
.st15{fill:#322E2E;}
.st16{fill:#262223;}
</style>
<rect class="st0" width="67.2" height="69"/>
<g>
<path class="st1" d="M59.8,37.4c-0.4,1.7-1,3.3-1.8,4.9c-1.7-1-3.4-1.3-5.3-0.9c-1.4,0.4-2.6,1.1-3.5,2.3c-1.7,2.1-2.2,5.7,0.3,8.6
c-1.4,1-2.9,1.9-4.5,2.6c-0.6-1.8-1.8-3.2-3.5-4c-1.3-0.6-2.6-0.8-4-0.6c-1.6,0.3-3,1.1-4.1,2.4C32.4,54,32,55.5,32,57.2
c-1,0-4.1-0.5-5.1-0.9c0.7-1.7,0.7-3.5-0.1-5.2c-0.6-1.4-1.6-2.4-2.9-3.1c-2.6-1.4-6.2-1-8.5,1.8c-1.2-1.2-2.3-2.5-3.4-4
c1.9-1,3.1-2.6,3.5-4.7c0.3-1.3,0.2-2.5-0.4-3.7c-1.5-3.2-4.1-4.4-7.6-4c-0.2-1.7-0.2-3.4,0-5.1c0.2,0,0.3,0,0.4,0
c3.3,0.5,6.6-1.5,7.5-4.9c0.7-2.8-0.5-5.7-3-7.2c-0.1-0.1-0.2-0.1-0.4-0.2c0.3-0.7,2.5-3.3,3.3-4c1.2,1.5,2.7,2.4,4.6,2.5
c1.5,0.2,2.9-0.2,4.2-1c2.4-1.5,3.9-4.7,2.5-8.1c0.7-0.4,4.3-1,5.1-0.9c0,4,2.8,6.3,5.4,6.7c3.2,0.6,6.5-1.1,7.7-4.5
c1.6,0.7,3.1,1.6,4.5,2.6c-1.3,1.4-1.9,3.1-1.7,5c0.1,1.5,0.7,2.8,1.8,4c2,2.1,5.4,2.9,8.4,1.1c0.4,0.4,1.6,3.6,1.8,4.9
c-2,0.3-3.7,1.3-4.8,3c-0.8,1.2-1.1,2.6-1,4C54.2,34.1,56.1,36.9,59.8,37.4z M34,39.3c1.5,3.7,5,4.7,7.6,4.2
c2.7-0.5,4.7-2.6,5.2-5.2c0.3-1.6,0.1-3.1-0.7-4.5c-0.8-1.4-2-2.4-3.5-2.9c0.1-0.1,0.2-0.1,0.3-0.1c3-1.2,4.6-4.2,4-7.3
c-0.7-3.6-4.2-5.9-7.7-5.2c-2.3,0.5-4,1.8-4.9,4c0,0.1-0.1,0.1-0.1,0.2c-1.1-3-4-4.6-6.8-4.3c-2.9,0.2-5.3,2.3-6,5.1
c-0.3,1.4-0.2,2.7,0.3,4c0.7,1.8,2.1,3,3.9,3.7c-3.5,1.4-4.8,4.6-4.3,7.3c0.5,2.8,2.7,5.1,5.7,5.5C30.2,44,32.9,42.2,34,39.3z"/>
<path class="st2" d="M38.7,8.2c-1.8,0-3.3-1.5-3.3-3.3c0-2,1.6-3.3,3.3-3.3c1.8,0,3.3,1.5,3.3,3.3C42.1,6.7,40.6,8.2,38.7,8.2z"/>
<path class="st3" d="M12.4,21.8c0,1.7-1.3,3.3-3.3,3.3c-2.1,0-3.3-1.7-3.3-3.3c0-1.8,1.5-3.3,3.3-3.3
C11.2,18.5,12.5,20.3,12.4,21.8z"/>
<path class="st4" d="M24,7.9c0,1.8-1.5,3.3-3.3,3.3c-1.8,0-3.3-1.5-3.3-3.3c0-1.8,1.5-3.3,3.3-3.3C22.6,4.6,24,6.1,24,7.9z"/>
<path class="st5" d="M50.5,60.5c0.1,0,0.1,0,0.2,0c1,0,1.9,0,2.9,0c0.4,0,0.7,0.1,1.1,0.2c0.8,0.3,1.4,0.8,1.6,1.7
c0.2,0.8,0.2,1.6,0,2.4c-0.3,1.1-1,1.9-2.4,2c-1.1,0.1-2.2,0.1-3.3,0.1c0,0,0,0-0.1,0C50.5,64.7,50.5,62.6,50.5,60.5z M52.3,65.4
c0.4,0,0.8,0,1.2-0.1c0.5-0.1,0.9-0.4,1-0.9c0.1-0.5,0.1-0.9,0-1.4c-0.1-0.5-0.4-0.8-0.9-0.9c-0.4-0.1-0.9-0.1-1.3-0.1
C52.3,63.1,52.3,64.2,52.3,65.4z"/>
<path class="st6" d="M47.8,66.8c-0.7,0-1.3,0-2,0c-0.8-2.1-1.6-4.2-2.4-6.4c0.7,0,1.4,0,2.1,0c0.4,1.4,0.8,2.7,1.2,4.1
c0,0,0.1,0,0.1,0c0.4-1.4,0.8-2.7,1.2-4.1c0.7,0,1.3,0,2.1,0C49.4,62.6,48.6,64.7,47.8,66.8z"/>
<path class="st7" d="M62.4,61.3c-0.3,0.4-0.5,0.8-0.8,1.3c-0.2-0.1-0.4-0.2-0.7-0.3c-0.4-0.2-0.7-0.3-1.1-0.3
c-0.5,0-0.9,0.2-1.1,0.7c-0.3,0.7-0.3,1.4,0,2c0.2,0.5,0.7,0.8,1.2,0.7c0.5-0.1,1.1-0.4,1.6-0.6c0.2,0.4,0.5,0.8,0.8,1.2
c-0.6,0.3-1.2,0.6-1.8,0.8c-0.6,0.1-1.1,0.2-1.7,0.1c-0.9-0.2-1.5-0.7-1.8-1.5c-0.4-1.1-0.4-2.3,0-3.4c0.4-1.1,1.4-1.6,2.6-1.6
C60.5,60.4,61.5,60.7,62.4,61.3z"/>
<path class="st8" d="M20.4,64.6c-1.1,0-2.2,0-3.3,0c-0.1,1,0.6,1.7,1.6,1.5c0.4-0.1,0.7-0.2,1-0.4c0.1,0,0.2-0.1,0.4-0.2
c0.1,0.2,0.3,0.4,0.4,0.6c-0.7,0.4-1.4,0.7-2.1,0.8c-1,0.1-2.1-0.4-2.3-2c-0.1-0.5-0.1-1,0.1-1.4c0.3-1.1,1.2-1.6,2.3-1.6
c0.2,0,0.4,0,0.6,0.1c0.7,0.2,1.1,0.6,1.3,1.3C20.5,63.7,20.5,64.1,20.4,64.6z M19.5,64c0-0.9-0.4-1.4-1.1-1.4
c-0.8,0-1.3,0.6-1.3,1.4C17.9,64,18.7,64,19.5,64z"/>
<path class="st9" d="M29.2,64.3c0,0.2,0,0.4-0.1,0.7c-0.2,1.1-1,1.8-2.1,1.8c-0.3,0-0.7,0-1-0.1c-0.7-0.2-1.2-0.6-1.4-1.3
c-0.2-0.7-0.2-1.5,0-2.2c0.3-0.8,0.8-1.2,1.7-1.3c0.4-0.1,0.8-0.1,1.2,0c0.9,0.2,1.5,0.9,1.6,1.9C29.2,64,29.2,64.1,29.2,64.3z
M28.3,64.3c-0.1-0.3-0.1-0.6-0.2-1c-0.2-0.5-0.6-0.8-1.2-0.8c-0.6,0-1,0.3-1.2,0.8c-0.2,0.6-0.2,1.3,0,1.9
c0.2,0.5,0.6,0.8,1.2,0.8c0.6,0,1-0.3,1.2-0.8C28.2,65,28.2,64.7,28.3,64.3z"/>
<path class="st10" d="M43.2,66.1c-0.9,0.5-1.7,0.9-2.7,0.7c-0.8-0.1-1.3-0.6-1.6-1.4c-0.2-0.6-0.2-1.3-0.1-2
c0.3-1.2,1.2-1.6,2.4-1.6c0.2,0,0.3,0,0.5,0.1c0.8,0.2,1.2,0.7,1.4,1.5c0.1,0.3,0.1,0.8,0,1.2c-1.1,0-2.2,0-3.3,0
c0,0.4,0.1,0.7,0.3,1c0.3,0.5,0.8,0.6,1.4,0.5c0.3-0.1,0.6-0.2,1-0.4c0.1-0.1,0.3-0.1,0.4-0.2C42.9,65.7,43,65.8,43.2,66.1z
M39.7,63.9c0.8,0,1.6,0,2.4,0c0-0.9-0.3-1.4-1-1.4C40.3,62.5,39.7,63.1,39.7,63.9z"/>
<path class="st11" d="M33.4,66.7c0-0.2,0-0.4,0-0.5c-0.3,0.2-0.5,0.3-0.8,0.4c-0.5,0.2-1,0.3-1.5,0.2c-0.5-0.1-0.8-0.4-0.9-0.8
c0-0.2-0.1-0.3-0.1-0.5c0-1.1,0-2.2,0-3.3c0,0,0-0.1,0-0.1c0.3,0,0.6,0,0.9,0c0,0.1,0,0.2,0,0.4c0,0.9,0,1.8,0,2.7
c0,0.1,0,0.3,0,0.4c0.1,0.5,0.3,0.7,0.8,0.6c0.5-0.1,1-0.3,1.4-0.6c0.1-0.1,0.1-0.2,0.1-0.3c0-0.9,0-1.9,0-2.8c0-0.1,0-0.2,0-0.3
c0.3,0,0.6,0,1,0c0,1.6,0,3.1,0,4.7C34.1,66.7,33.8,66.7,33.4,66.7z"/>
<path class="st12" d="M10.8,66.7c0-0.1,0-0.3,0-0.4c0-0.9,0-1.8,0-2.7c0-0.8-0.4-1.1-1.2-0.9c-0.2,0-0.3,0.1-0.5,0.2
c-0.7,0.3-0.7,0.3-0.7,1.2c0,0.8,0,1.5,0,2.3c0,0.1,0,0.3,0,0.4c-0.3,0-0.6,0-1,0c0-1.6,0-3.2,0-4.8c0.3,0,0.6,0,1,0
c0,0.2,0,0.3,0,0.6c0.1-0.1,0.3-0.2,0.4-0.2c0.6-0.3,1.1-0.5,1.8-0.4c0.7,0.1,1.1,0.5,1.1,1.2c0,0.2,0,0.4,0,0.7c0,1,0,2,0,3
C11.5,66.7,11.1,66.7,10.8,66.7z"/>
<path class="st13" d="M35.8,60.7c0.3-0.1,0.6-0.1,1-0.2c0,0.5,0,0.9,0,1.4c0.5,0,0.9,0,1.3,0c0,0.2,0,0.5,0,0.7c-0.4,0-0.8,0-1.3,0
c0,0.1,0,0.2,0,0.2c0,0.8,0,1.7,0,2.5c0,0.1,0,0.1,0,0.2c0,0.3,0.1,0.4,0.4,0.4c0.3,0,0.6-0.1,0.9-0.2c0.1,0.2,0.2,0.4,0.3,0.7
c-0.7,0.3-1.3,0.5-2,0.3c-0.3-0.1-0.5-0.4-0.6-0.7c0-0.2-0.1-0.5-0.1-0.7c0-0.7,0-1.5,0-2.2c0-0.1,0-0.3,0-0.4c-0.2,0-0.4,0-0.6,0
c0-0.2,0-0.5,0-0.7c0.2,0,0.4,0,0.6,0C35.8,61.5,35.8,61.1,35.8,60.7z"/>
<path class="st14" d="M15.4,65.8c0.1,0.2,0.2,0.4,0.3,0.7c-0.6,0.2-1.1,0.5-1.8,0.3c-0.4-0.1-0.7-0.4-0.8-0.8
c0-0.2-0.1-0.4-0.1-0.6c0-0.8,0-1.5,0-2.3c0-0.1,0-0.3,0-0.4c-0.2,0-0.4,0-0.6,0c0-0.2,0-0.5,0-0.7c0.2,0,0.4,0,0.6,0
c0-0.4,0-0.8,0-1.2c0.3-0.1,0.6-0.1,1-0.2c0,0.5,0,0.9,0,1.4c0.5,0,0.9,0,1.3,0c0,0.3,0,0.5,0,0.7c-0.4,0-0.8,0-1.3,0
c0,0.1,0,0.2,0,0.3c0,0.8,0,1.6,0,2.3c0,0.1,0,0.1,0,0.2c0,0.4,0.2,0.6,0.6,0.5C14.9,65.9,15.2,65.8,15.4,65.8z"/>
<path class="st15" d="M21.4,61.9c0.3,0,0.6,0,0.9,0c0,0.3,0,0.5,0,0.8c0.5-0.6,1-0.9,1.8-0.8c0,0.3,0,0.6,0,0.9
c-0.1,0-0.2,0-0.3,0.1c-0.9,0.1-1.5,0.8-1.5,1.7c0,0.6,0,1.2,0,1.8c0,0.1,0,0.3,0,0.4c-0.3,0-0.6,0-1,0
C21.4,65.1,21.4,63.5,21.4,61.9z"/>
<path class="st16" d="M5.2,60.3c0.3,0,0.6,0,0.9,0c0,2.1,0,4.2,0,6.4c-0.3,0-0.6,0-0.9,0C5.2,64.6,5.2,62.5,5.2,60.3z"/>
</g>
</svg>


View File

@ -0,0 +1,2 @@
name: interoutevdc
version: "0.2.0"

View File

@ -0,0 +1,3 @@
.catalog:
name: "oneandone"
version: "v1.1.1"

View File

@ -0,0 +1 @@
https://1and1.github.io/ui-driver-oneandone/1.0.0/component.js

View File

@ -0,0 +1 @@
https://github.com/1and1/docker-machine-driver-oneandone/releases/download/v1.1.1/docker-machine-driver-oneandone-linux-amd64-v1.1.1.tar.gz

Binary file not shown (new image, 11 KiB).

View File

@ -0,0 +1,2 @@
name: oneandone
version: "v1.1.1"

View File

@ -0,0 +1,3 @@
.catalog:
name: "profitbricks"
version: "v1.2.3"

View File

@ -0,0 +1 @@
https://profitbricks.github.io/ui-driver-profitbricks/docs/1.1.0/component.js

View File

@ -0,0 +1 @@
https://github.com/profitbricks/docker-machine-driver-profitbricks/releases/download/v1.2.3/docker-machine-driver-profitbricks-v1.2.3-linux-amd64.tar.gz

Binary file not shown (new image, 56 KiB).

View File

@ -0,0 +1,2 @@
name: profitbricks
version: "v1.2.3"

View File

@ -0,0 +1,46 @@
# Mesos-dns (Experimental)
### Info
Add the mesos-dns component to your Mesos orchestrator so that Docker tasks deployed through Marathon can resolve names via DNS.
### Usage
Mesos-dns will listen on the link_local_ip and forward DNS queries to RancherDNS.
To deploy Marathon tasks, you need to set network=HOST and dns=link_local_ip.
Marathon JSON template example:
```
{
  "id": "NAME",
  "cmd": null,
  "cpus": 1,
  "mem": 128,
  "disk": 0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "DOCKER_IMAGE",
      "network": "HOST",
      "privileged": false,
      "parameters": [
        {
          "key": "dns",
          "value": "169.254.169.251"
        }
      ],
      "forcePullImage": false
    }
  },
  "portDefinitions": [
    {
      "port": 10000,
      "protocol": "tcp",
      "labels": {}
    }
  ]
}
```
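The task can then be submitted to Marathon's REST API, for example (assuming Marathon on its default port 8080 and the template saved as app.json):
```
curl -X POST http://<marathon-host>:8080/v2/apps \
  -H "Content-Type: application/json" -d @app.json
```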

View File

@ -0,0 +1,34 @@
version: '2'
services:
mesos-dns:
labels:
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: mesos-dns-route
tty: true
image: rawmind/alpine-mesos-dns:0.6.0-3
cap_add:
- NET_ADMIN
external_links:
- mesos/zookeeper:zookeeper
- mesos/mesos-master:master
environment:
- MESOS_ZK=zk://zookeeper.mesos:2181/mesos
- MESOS_MASTER="master.mesos:5050"
- MESOS_DNS_DOMAIN=${mesos_domain}
- MESOS_DNS_RESOLVERS="169.254.169.250"
- LINK_LOCAL_IP=${mesos_localip}
mesos-dns-route:
labels:
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: 'true'
tty: true
network_mode: host
image: rawmind/alpine-link-local:0.1-2
cap_add:
- NET_ADMIN
environment:
- DESTINATION_IP=${mesos_localip}
- BRIDGE=${mesos_bridge}

View File

@ -0,0 +1,32 @@
.catalog:
name: mesos-dns
version: v0.6.0-rancher1
description: |
(Experimental) Mesos-dns.
minimum_rancher_version: v0.59.0
maintainer: "Raul Sanchez <rawmind@gmail.com>"
uuid: mesos-dns-0
questions:
- variable: "mesos_domain"
description: "Mesos domain."
label: "Mesos domain:"
required: true
default: "mesos"
type: "string"
- variable: "mesos_localip"
description: "Link Local Ip."
label: "Mesos LLI:"
required: true
default: "169.254.169.251"
type: "string"
- variable: "mesos_bridge"
description: "Bridge."
label: "Mesos bridge:"
required: true
default: "docker0"
type: "string"
mesos-dns:
retain_ip: true

View File

@ -0,0 +1,36 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 19.1.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
viewBox="-323 267 56 64" style="enable-background:new -323 267 56 64;" xml:space="preserve">
<style type="text/css">
.st0{fill:#00435A;}
.st1{fill:#00AEDE;}
.st2{fill:#394D54;}
</style>
<g>
<path class="st0" d="M-281.5,306v-13.8l-11.9,6.9L-281.5,306"/>
<path class="st1" d="M-282.2,290.8l-11.9-6.9v13.8L-282.2,290.8"/>
<path class="st1" d="M-295.7,297.7v-13.8l-12,6.9L-295.7,297.7"/>
<path class="st1" d="M-296.5,282.6l-11.9-6.9v13.9L-296.5,282.6"/>
<path class="st0" d="M-282.2,307.3l-11.9-6.9v13.8L-282.2,307.3"/>
<path class="st0" d="M-295.7,314.3v-13.8l-12,6.9L-295.7,314.3"/>
<path class="st1" d="M-267.9,299.1l-12-6.9V306L-267.9,299.1"/>
<path class="st1" d="M-310,289.5v-13.9l-12,6.9L-310,289.5"/>
<path class="st0" d="M-296.5,315.6l-11.9-6.9v13.8L-296.5,315.6"/>
<path class="st1" d="M-310,322.5v-13.8l-12,6.9L-310,322.5"/>
<path class="st0" d="M-296.5,299.1l-11.9-6.9V306L-296.5,299.1"/>
<path class="st1" d="M-281.5,289.5v-13.9l-11.9,6.9L-281.5,289.5"/>
<path class="st0" d="M-294.2,267.4v13.8l11.9-6.9L-294.2,267.4"/>
<path class="st0" d="M-307.7,274.3l12,6.9v-13.8L-307.7,274.3"/>
<path class="st0" d="M-282.2,323.8l-11.9-6.9v13.8L-282.2,323.8"/>
<path class="st0" d="M-295.7,330.8v-13.8l-12,6.9L-295.7,330.8"/>
<path class="st1" d="M-267.2,314.3v-13.8l-11.9,6.9L-267.2,314.3"/>
<path class="st1" d="M-310,306v-13.9l-12,6.9L-310,306"/>
<path class="st1" d="M-310.8,307.3l-12.1-6.9v13.8L-310.8,307.3"/>
<path class="st1" d="M-267.2,297.7v-13.8l-11.9,6.9L-267.2,297.7"/>
<path class="st1" d="M-267.9,282.6l-12-6.9v13.9L-267.9,282.6"/>
<path class="st1" d="M-267.9,315.6l-12-6.9v13.8L-267.9,315.6"/>
<path class="st0" d="M-281.5,322.5v-13.8l-11.9,6.9L-281.5,322.5"/>
<path class="st1" d="M-310.8,290.8l-12.1-6.9v13.8L-310.8,290.8"/>
</g>
</svg>


View File

@ -0,0 +1,8 @@
name: Mesos-dns
description: |
(Experimental) Mesos-dns
version: v0.6.0-rancher1
category: External DNS
maintainer: "Raul Sanchez <rawmind@gmail.com>"
license:
projectURL: https://github.com/rawmind0/alpine-mesos-dns

View File

@ -2,7 +2,7 @@ mongo-cluster:
restart: always
environment:
MONGO_SERVICE_NAME: mongo-cluster
tty: true
CATTLE_SCRIPT_DEBUG: ${debug}
entrypoint: /opt/rancher/bin/entrypoint.sh
command:
- --replSet
@ -17,11 +17,10 @@ mongo-cluster:
mongo-base:
restart: always
net: none
tty: true
labels:
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
image: rancher/mongodb-conf:v0.1.0
image: rancher/mongodb-conf:v0.1.1
stdin_open: true
entrypoint: /bin/true
mongo-datavolume:

View File

@ -11,6 +11,12 @@
type: "string"
required: true
default: "rs0"
- variable: debug
description: "Enable Debug log for Mongo containers"
label: "Debug"
type: "string"
required: false
default: ""
mongo-cluster:
scale: 3
retain_ip: true

View File

@ -2,7 +2,7 @@ mongo-cluster:
restart: always
environment:
MONGO_SERVICE_NAME: mongo-cluster
tty: true
CATTLE_SCRIPT_DEBUG: ${debug}
entrypoint: /opt/rancher/bin/entrypoint.sh
command:
- --replSet
@ -17,11 +17,10 @@ mongo-cluster:
mongo-base:
restart: always
net: none
tty: true
labels:
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
image: rancher/mongodb-conf:v0.1.0
image: rancher/mongodb-conf:v0.1.1
stdin_open: true
entrypoint: /bin/true
mongo-datavolume:

View File

@ -11,6 +11,12 @@
type: "string"
required: true
default: "rs0"
- variable: debug
description: "Enable Debug log for Mongo containers"
label: "Debug"
type: "string"
required: false
default: ""
mongo-cluster:
scale: 3
retain_ip: true

View File

@ -2,7 +2,7 @@ mongo-cluster:
restart: always
environment:
MONGO_SERVICE_NAME: mongo-cluster
tty: true
CATTLE_SCRIPT_DEBUG: ${debug}
entrypoint: /opt/rancher/bin/entrypoint.sh
command:
- --replSet
@ -18,12 +18,11 @@ mongo-cluster:
mongo-base:
restart: always
net: none
tty: true
labels:
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
image: rancher/mongodb-conf:v0.1.0
image: rancher/mongodb-conf:v0.1.1
stdin_open: true
entrypoint: /bin/true
mongo-datavolume:

View File

@ -18,6 +18,12 @@
Example: 'database'
required: false
type: "string"
- variable: debug
description: "Enable Debug log for Mongo containers"
label: "Debug"
type: "string"
required: false
default: ""
mongo-cluster:
scale: 3
retain_ip: true

View File

@ -0,0 +1,26 @@
# Prometheus
### Info:
This template deploys a collection of monitoring services based upon the technologies listed below. Once deployed, you will have a monitoring platform capable of querying a wide variety of metrics that represent your environment; also included are some handy pre-configured dashboards to get you started.
In this catalog item, the following technologies are utilised to make this as useful as possible:
* [Prometheus](https://github.com/prometheus/prometheus) - Used to scrape and store metrics from our data sources.
* [Prometheus Node Exporter](https://github.com/prometheus/node_exporter) - Gets host level metrics and exposes them to Prometheus.
* [cAdvisor](https://github.com/google/cadvisor) - Deploys and exposes the cAdvisor stats used by Rancher's agent container to Prometheus.
* [Grafana](https://github.com/grafana/grafana/) - Used to visualise the data from Prometheus and InfluxDB.
* [Prometheus Rancher Exporter](https://github.com/infinityworksltd/prometheus-rancher-exporter/) - Allows Prometheus to access the Rancher API and return the status of any stack or service in the rancher environment associated with the API key used.
The full complement of metrics from the Rancher server itself is now available for graphing directly in Prometheus; this is easily enabled with an environment variable. For those interested, I've documented the steps [here](https://github.com/infinityworksltd/Guide_Rancher_Monitoring).
All components in this stack are open source tools available in the community. All this template does is bind them together in an easy-to-use package. I expect most people who find this useful will make use of this as a starting point and develop it further around their own needs.
## Deployment:
1. Select Prometheus from the community catalog.
2. Enter the IP address of your Rancher server (used for accessing Rancher's own metrics; optional)
3. Click deploy.
## Usage
* Grafana will now be available, running on port 3000. I've added a number of dashboards to help get you started. Authentication is with the default `admin/admin`.
* Prometheus will now be available, running on port 9090. Have a play around with some of the data. For more information on Prometheus, check out their [documentation](https://prometheus.io/docs/introduction/overview/).
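As a quick check that scraping works, a PromQL query like the following (assuming the cAdvisor component above is being scraped) plots per-container CPU usage:
```
rate(container_cpu_usage_seconds_total[5m])
```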

View File

@ -0,0 +1,72 @@
cadvisor:
labels:
io.rancher.scheduler.global: 'true'
tty: true
image: google/cadvisor:latest
stdin_open: true
volumes:
- "/:/rootfs:ro"
- "/var/run:/var/run:rw"
- "/sys:/sys:ro"
- "/var/lib/docker/:/var/lib/docker:ro"
node-exporter:
labels:
io.rancher.scheduler.global: 'true'
tty: true
image: prom/node-exporter:latest
stdin_open: true
prom-conf:
tty: true
image: infinityworks/prom-conf:19
volumes:
- /etc/prom-conf/
net: none
prometheus:
tty: true
image: prom/prometheus:v1.6.0
command: -alertmanager.url=http://alertmanager:9093 -config.file=/etc/prom-conf/prometheus.yml -storage.local.path=/prometheus -web.console.libraries=/etc/prometheus/console_libraries -web.console.templates=/etc/prometheus/consoles
ports:
- 9090:9090
labels:
io.rancher.sidekicks: prom-conf
volumes_from:
- prom-conf
volumes:
- /data/
links:
- cadvisor:cadvisor
- node-exporter:node-exporter
- prometheus-rancher-exporter:prometheus-rancher-exporter
extra_hosts:
- "rancher-server:${RANCHER_SERVER}"
graf-db:
tty: true
image: infinityworks/graf-db:11
command: cat
volumes:
- /var/lib/grafana/
net: none
grafana:
tty: true
image: grafana/grafana:4.2.0
ports:
- 3000:3000
labels:
io.rancher.sidekicks: graf-db
volumes_from:
- graf-db
links:
- prometheus:prometheus
- prometheus-rancher-exporter:prometheus-rancher-exporter
prometheus-rancher-exporter:
tty: true
labels:
io.rancher.container.create_agent: true
io.rancher.container.agent.role: environment
image: infinityworks/prometheus-rancher-exporter:v0.22.52

View File

@ -0,0 +1,43 @@
.catalog:
name: "Prometheus"
version: "3.0.0"
description: "Prometheus Monitoring Solution"
uuid: prometheus-2
minimum_rancher_version: v1.5.5
questions:
- variable: "RANCHER_SERVER"
label: "Rancher Server"
description: "IP Address of the rancher server, no HTTP or slashes. This is only required for users that have enabled metrics to be exported by Rancher"
default: "0.0.0.0"
required: false
type: "string"
prometheus:
scale: 1
health_check:
port: 9090
interval: 5000
unhealthy_threshold: 3
request_line: ''
healthy_threshold: 2
response_timeout: 5000
grafana:
scale: 1
health_check:
port: 3000
interval: 5000
unhealthy_threshold: 3
request_line: ''
healthy_threshold: 2
response_timeout: 5000
prometheus-rancher-exporter:
scale: 1
health_check:
port: 9173
interval: 5000
unhealthy_threshold: 3
request_line: ''
healthy_threshold: 2
response_timeout: 5000

View File

@ -1,5 +1,5 @@
name: Prometheus
description: |
Prometheus and friends, auto-discovering monitoring solution for Rancher deployments.
version: 2.1.0
version: 3.0.0
category: Monitoring

View File

@ -3,6 +3,7 @@
version: "11.0.563-rancher1"
description: "Datadog Agent and DogStatsD"
minimum_rancher_version: v0.46.0
maximum_rancher_version: v1.1.99
questions:
- variable: "api_key"
label: "DataDog Api Key"

View File

@ -3,6 +3,7 @@
version: "11.0.570-rancher1"
description: "Real-time performance tracking and visualization of your container-based application deployment"
minimum_rancher_version: v0.46.0
maximum_rancher_version: v1.1.99
questions:
- variable: "api_key"
label: "DataDog Api Key"

View File

@ -3,6 +3,7 @@
version: 11.0.580-rancher1
description: Real-time performance tracking and visualization of your container-based application deployment
minimum_rancher_version: v0.46.0
maximum_rancher_version: v1.1.99
questions:
- variable: api_key
label: Datadog Api Key

View File

@ -3,6 +3,7 @@
version: "11.1.580-rancher1"
description: "Real-time performance tracking and visualization of your container-based application deployment"
minimum_rancher_version: v0.46.0
maximum_rancher_version: v1.1.99
questions:
- variable: "api_key"
label: "DataDog Api Key"

View File

@ -3,6 +3,7 @@
version: "11.3.585-rancher1"
description: "Real-time performance tracking and visualization of your container-based application deployment"
minimum_rancher_version: v0.46.0
maximum_rancher_version: v1.1.99
questions:
- variable: "api_key"
label: "DataDog Api Key"

View File

@ -0,0 +1,25 @@
# Datadog agent
This template deploys a [Datadog](https://www.datadoghq.com/) agent stack consisting of the official [docker-dd-agent](https://www.github.com/Datadog/docker-dd-agent) image and a configuration sidekick that provides closer integration with Rancher:
* Hosts in Datadog are named correctly
* Host labels can be exported as Datadog host tags
* Service labels can be exported as Datadog metric tags
## Service Discovery
**Note**: Service discovery templates that contain the `%%host%%` placeholder are currently not working in Rancher 1.2 and up due to the switch to CNI networking.
Please refer to the Datadog documentation [here](http://docs.datadoghq.com/guides/servicediscovery/) to learn how to provide configuration templates for Service Discovery in etcd or Consul.
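For instance, with an etcd backend the check configuration for an nginx image could be seeded along these lines — a sketch following Datadog's documented `check_configs` key layout (image name and status URL are placeholders, and the `%%host%%` caveat above applies):
```
etcdctl set /datadog/check_configs/nginx/check_names '["nginx"]'
etcdctl set /datadog/check_configs/nginx/init_configs '[{}]'
etcdctl set /datadog/check_configs/nginx/instances '[{"nginx_status_url": "http://%%host%%/nginx_status"}]'
```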
## Changelog
**1.1.0-11.0.5110**
New versioning scheme: `<TemplateVersion>-<DatadogImageVersion>`
* DEPRECATED: DogStatsd standalone mode
* NEW: Configure global host tags
* NEW: Configure custom log verbosity
* NEW: Enable collection of AWS EC2 custom tags
* NEW: Support for Amazon Linux AMIs
* NEW: Enable Datadog trace agent

View File

@ -0,0 +1,47 @@
datadog-init:
image: janeczku/datadog-rancher-init:v2.2.4
net: none
command: /bin/true
volumes:
- /opt/rancher
labels:
io.rancher.container.start_once: 'true'
io.rancher.container.pull_image: always
datadog-agent:
image: datadog/docker-dd-agent:11.0.5110
entrypoint: /opt/rancher/entrypoint-wrapper.py
command:
- supervisord
- -n
- -c
- /etc/dd-agent/supervisor.conf
restart: always
environment:
# Evaluated by datadog-agent image
API_KEY: ${api_key}
SD_BACKEND_HOST: ${sd_backend_host}
SD_BACKEND_PORT: ${sd_backend_port}
SD_TEMPLATE_DIR: ${sd_template_dir}
STATSD_METRIC_NAMESPACE: ${statsd_namespace}
DD_APM_ENABLED: ${dd_apm_enabled}
EC2_TAGS: ${dd_ec2_tags}
DD_LOG_LEVEL: ${dd_log_level}
# Evaluated by datadog-init script
DD_HOST_LABELS: ${host_labels}
DD_CONTAINER_LABELS: ${service_labels}
DD_SERVICE_DISCOVERY: ${service_discovery}
DD_SD_CONFIG_BACKEND: ${sd_config_backend}
DD_CONSUL_TOKEN: ${dd_consul_token}
DD_CONSUL_SCHEME: ${dd_consul_scheme}
DD_CONSUL_VERIFY: ${dd_consul_verify}
DD_METADATA_HOSTNAME: rancher-metadata
TAGS: ${host_tags}
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /proc/:/host/proc/:ro
- ${cgroups_location}:/host/sys/fs/cgroup:ro
volumes_from:
- datadog-init
labels:
io.rancher.scheduler.global: "${global_service}"
io.rancher.sidekicks: 'datadog-init'

View File

@ -0,0 +1,142 @@
.catalog:
name: "Datadog"
version: "1.1.0-11.0.5110"
description: "Real-time performance tracking and visualization of your container-based application deployment"
minimum_rancher_version: v1.2.0
questions:
- variable: "api_key"
label: "Datadog API Key"
description: |
Enter your Datadog API key.
required: true
type: "string"
- variable: "global_service"
label: "Global Service"
description: |
Enable this option to run a Datadog agent container on every host in the environment.
required: true
type: "boolean"
default: true
- variable: "host_labels"
label: "Export Host Labels as Tags"
description: |
Comma delimited list of host labels to export as Datadog host tags, e.g. 'region,zone'.
required: false
type: "string"
- variable: "service_labels"
label: "Export Service Labels as Tags"
description: |
Comma delimited list of service labels to export as Datadog metric tags.
'io.rancher.stack.name' and 'io.rancher.stack_service.name' are exported by default.
required: false
type: "string"
- variable: "host_tags"
label: "Global Host Tags"
description: |
Comma delimited list of host tags to apply to metrics, e.g. 'simple-tag-0,tag-key-1:tag-value-1'.
required: false
type: "string"
- variable: "dd_ec2_tags"
label: "Collect AWS EC2 Tags"
description: |
Collect AWS EC2 custom tags as agent tags (requires an IAM role associated with the instance).
required: true
type: "boolean"
default: false
- variable: cgroups_location
label: Cgroup directory location
description: |
Set this to '/cgroups/' if your hosts are running Amazon Linux AMIs.
required: true
type: enum
default: '/sys/fs/cgroup/'
options:
- '/sys/fs/cgroup/'
- '/cgroups/'
- variable: "dd_apm_enabled"
label: "Enable APM agent"
description: |
Enable the Datadog trace-agent along with the infrastructure agent, allowing the container to accept traces on 8126/tcp.
required: true
type: "boolean"
default: false
- variable: "service_discovery"
label: "Enable Service Discovery"
description: |
Collect metrics from supported applications running in Docker containers.
required: true
type: "boolean"
default: false
- variable: "sd_config_backend"
label: Service Discovery Configuration Backend
description: |
Choose a key/value store to use for looking up application configuration templates.
If none is provided only auto config templates will be used.
required: true
type: enum
default: none
options:
- none
- etcd
- consul
- variable: "sd_backend_host"
label: "Configuration Backend Host"
description: |
IP address or DNS name to use to connect to the configuration backend.
required: false
type: "string"
- variable: "sd_backend_port"
label: "Configuration Backend Port"
description: |
Port to use to connect to the configuration backend.
required: false
type: "int"
- variable: "sd_template_dir"
label: "Configuration Backend Template Path"
description: |
Specify a custom path where the agent should look for configuration templates in the backend.
The default is '/datadog/check_configs'.
required: false
type: "string"
- variable: "dd_consul_scheme"
label: "Consul Connection Scheme"
description: |
Scheme to use for requests to a Consul backend.
required: false
type: enum
default: http
options:
- http
- https
- variable: "dd_consul_verify"
label: "Verify Consul SSL Certificate"
description: |
Whether to verify the SSL certificate for HTTPS requests to a Consul backend.
required: false
type: "boolean"
default: true
- variable: "dd_consul_token"
label: "Consul ACL Token"
description: |
If the Consul backend uses ACL, specify a token granting read access to the configuration templates.
required: false
type: "string"
- variable: "statsd_namespace"
label: "StatsD Metric Namespace"
description: |
Optional namespace for aggregated StatsD metrics.
required: false
type: "string"
- variable: "dd_log_level"
label: "Agent log level"
description: |
Set the logging verbosity of the Datadog agent.
required: false
type: enum
default: INFO
options:
- CRITICAL
- ERROR
- WARNING
- INFO
- DEBUG

View File

@ -1,7 +1,7 @@
name: Datadog
description: |
Real-time performance tracking and visualization of your container-based application deployment
version: 11.3.585-rancher1
version: 1.1.0-11.0.5110
category: Monitoring
maintainer: "Jan Bruder <jan@rancher.com>"
license: The MIT License

View File

@ -3,6 +3,7 @@
version: "v1.0.0"
description: "Updates credentials for ECR in Rancher"
uuid: ecr-1
maximum_rancher_version: "v1.4.99"
questions:
- variable: "aws_access_key_id"
label: "AWS Access Key ID"

View File

@ -3,6 +3,7 @@
version: "v1.0.1"
description: "Updates credentials for ECR in Rancher"
uuid: ecr-2
maximum_rancher_version: "v1.4.99"
questions:
- variable: "aws_access_key_id"
label: "AWS Access Key ID"

View File

@ -3,6 +3,7 @@
version: "v1.1.0"
description: "Updates credentials for ECR in Rancher"
uuid: ecr-3
maximum_rancher_version: "v1.4.99"
questions:
- variable: "aws_access_key_id"
label: "AWS Access Key ID"

View File

@ -0,0 +1,35 @@
# GoCD.io
### Info:
This template creates one GoCD server and scales out the number of GoCD agents you need.
Each GoCD agent is linked with a Docker engine container as a sidekick, so the idea is not to create one GoCD agent per language but to use Docker containers to build and test your stuff.
On a GoCD agent you can use:
- docker cli
- docker-compose cli
- rancher-compose cli
- make
### Usage:
Select GoCD from the catalog.
Choose whether to deploy the GoCD server, the GoCD agent, or both.
Enter the number of GoCD agents you need.
Choose the key used to auto-register GoCD agents.
Click deploy.
The GoCD server can now be accessed over the Rancher network on port `8153` (http://IP_CONTAINER:8153). To access it from outside the Rancher network, you need to set up a load balancer or expose port 8153.
### Source, bugs and enhancements
If you find bugs or want an enhancement, you can open a ticket on GitHub:
- [GoCD official core project](https://github.com/gocd/gocd)
- [GoCD Server docker image](https://github.com/disaster37/alpine-gocd-server)
- [GoCD Agent docker image](https://github.com/disaster37/alpine-gocd-agent)
- [Rancher Cattle metadata docker image](https://github.com/disaster37/rancher-cattle-metadata)

View File

@ -0,0 +1,125 @@
version: '2'
services:
{{- if eq .Values.DEPLOY_SERVER "true"}}
gocd-server:
tty: true
image: webcenter/alpine-gocd-server:17.3.0-1
volumes:
{{- if (contains .Values.VOLUME_DRIVER_SERVER "/")}}
- ${VOLUME_DRIVER_SERVER}:/data
{{- else}}
- gocd-server-data:/data
{{- end}}
environment:
- GOCD_CONFIG_memory=${GOCD_SERVER_MEMORY}
- GOCD_CONFIG_agent-key=${GOCD_AGENT_KEY}
- GOCD_CONFIG_server-url=${GOCD_SERVER_URL}
- GOCD_USER_${GOCD_USER}=${GOCD_PASSWORD}
- CONFD_BACKEND=${CONFD_BACKEND}
- CONFD_NODES=${CONFD_NODES}
- CONFD_PREFIX_KEY=${CONFD_PREFIX}
{{- if eq .Values.GOCD_AGENT_PACKAGE "true"}}
- GOCD_PLUGIN_script-executor=https://github.com/gocd-contrib/script-executor-task/releases/download/0.3/script-executor-0.3.0.jar
- GOCD_PLUGIN_docker-task=https://github.com/manojlds/gocd-docker/releases/download/0.1.27/docker-task-assembly-0.1.27.jar
- GOCD_PLUGIN_slack=https://github.com/Vincit/gocd-slack-task/releases/download/v1.3.1/gocd-slack-task-1.3.1.jar
- GOCD_PLUGIN_docker-pipline=https://github.com/Haufe-Lexware/gocd-plugins/releases/download/v1.0.0-beta/gocd-docker-pipeline-plugin-1.0.0.jar
- GOCD_PLUGIN_email-notifier=https://github.com/gocd-contrib/email-notifier/releases/download/v0.1/email-notifier-0.1.jar
- GOCD_PLUGIN_github-notifier=https://github.com/gocd-contrib/gocd-build-status-notifier/releases/download/1.3/github-pr-status-1.3.jar
- GOCD_PLUGIN_github-scm=https://github.com/ashwanthkumar/gocd-build-github-pull-requests/releases/download/v1.3.3/github-pr-poller-1.3.3.jar
- GOCD_PLUGIN_maven-repository=https://github.com/1and1/go-maven-poller/releases/download/v1.1.4/go-maven-poller.jar
- GOCD_PLUGIN_maven-task=https://github.com/ruckc/gocd-maven-plugin/releases/download/0.1.1/gocd-maven-plugin-0.1.1.jar
- GOCD_PLUGIN_s3-fetch=https://github.com/indix/gocd-s3-artifacts/releases/download/v2.0.2/s3fetch-assembly-2.0.2.jar
- GOCD_PLUGIN_s3-publish=https://github.com/indix/gocd-s3-artifacts/releases/download/v2.0.2/s3publish-assembly-2.0.2.jar
- GOCD_PLUGIN_nessus-scan=https://github.com/Haufe-Lexware/gocd-plugins/releases/download/v1.0.0-beta/gocd-nessus-scan-plugin-1.0.0.jar
- GOCD_PLUGIN_sonar=https://github.com/Haufe-Lexware/gocd-plugins/releases/download/v1.0.0-beta/gocd-sonar-qualitygates-plugin-1.0.0.jar
- GOCD_PLUGIN_gitlab-auth=https://github.com/gocd-contrib/gocd-oauth-login/releases/download/v2.3/gitlab-oauth-login-2.3.jar
- GOCD_PLUGIN_google-auth=https://github.com/gocd-contrib/gocd-oauth-login/releases/download/v2.3/google-oauth-login-2.3.jar
- GOCD_PLUGIN_github-auth=https://github.com/gocd-contrib/gocd-oauth-login/releases/download/v2.3/github-oauth-login-2.3.jar
{{- end}}
{{- if and (ne .Values.DEPLOY_LB "true") .Values.PUBLISH_PORT}}
ports:
- ${PUBLISH_PORT}:8153
{{- end}}
labels:
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
{{- if eq .Values.DEPLOY_LB "true"}}
lb:
image: rancher/lb-service-haproxy:v0.6.2
{{- if .Values.PUBLISH_PORT}}
ports:
- ${PUBLISH_PORT}:8153/tcp
{{- else}}
expose:
- 8153:8153/tcp
{{- end}}
links:
- gocd-server:gocd-server
labels:
io.rancher.container.agent.role: environmentAdmin
io.rancher.container.create_agent: 'true'
{{- end}}
{{- end}}
{{- if eq .Values.DEPLOY_AGENT "true"}}
gocd-agent:
tty: true
image: webcenter/alpine-gocd-agent:17.3.0-1
volumes:
{{- if (contains .Values.VOLUME_DRIVER_AGENT "/")}}
- ${VOLUME_DRIVER_AGENT}:/data
{{- else}}
- gocd-agent-data:/data
{{- end}}
- gocd-scheduler-setting:/opt/scheduler
environment:
- GOCD_CONFIG_memory=${GOCD_AGENT_MEMORY}
- GOCD_CONFIG_agent_key=${GOCD_AGENT_KEY}
- GOCD_CONFIG_agent_resource_docker=${GOCD_AGENT_RESOURCE}
- DOCKER_HOST=docker-engine:2375
{{- if eq .Values.DEPLOY_SERVER "true"}}
links:
- gocd-server:gocd-server
{{- end}}
labels:
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
io.rancher.container.agent.role: environment
io.rancher.container.create_agent: 'true'
io.rancher.sidekicks: rancher-cattle-metadata,docker-engine
rancher-cattle-metadata:
network_mode: none
labels:
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: "true"
image: webcenter/rancher-cattle-metadata:1.0.1
volumes:
- gocd-scheduler-setting:/opt/scheduler
docker-engine:
privileged: true
labels:
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
image: index.docker.io/docker:1.13-dind
volumes:
{{- if (contains .Values.VOLUME_DRIVER_AGENT "/")}}
- ${VOLUME_DRIVER_AGENT}:/data
{{- else}}
- gocd-agent-data:/data
{{- end}}
{{- end}}
volumes:
gocd-scheduler-setting:
driver: local
per_container: true
{{- if not (contains .Values.VOLUME_DRIVER_AGENT "/")}}
gocd-agent-data:
driver: ${VOLUME_DRIVER_AGENT}
per_container: true
{{- end}}
{{- if not (contains .Values.VOLUME_DRIVER_SERVER "/")}}
gocd-server-data:
driver: ${VOLUME_DRIVER_SERVER}
{{- end}}

View File

@ -0,0 +1,155 @@
version: '2'
catalog:
name: GoCD
version: 17.3.0-rancher1
minimum_rancher_version: v1.5.0
questions:
- variable: "DEPLOY_SERVER"
description: "Deploy GoCD server"
label: "Deploy GoCD server"
required: true
type: enum
default: "true"
options:
- "true"
- "false"
- variable: "DEPLOY_AGENT"
description: "Deploy GoCD agent"
label: "Deploy GoCD agent"
required: true
type: enum
default: "true"
options:
- "true"
- "false"
- variable: "GOCD_AGENT_SCALE"
description: "Number of GoCD agent"
label: "GoCD Agents"
required: true
default: 1
type: "string"
- variable: "GOCD_AGENT_KEY"
description: "Key to use for auto registration agent"
label: "Agent key"
required: true
type: "password"
- variable: "GOCD_SERVER_MEMORY"
description: "Max memory allowed to GoCD server"
label: "Max memory for server"
type: "string"
required: true
default: "1024m"
- variable: "GOCD_AGENT_MEMORY"
description: "Max memory allowed to GoCD agent"
label: "Max memory for agent"
type: "string"
required: true
default: "2048m"
- variable: "GOCD_AGENT_RESOURCE"
description: "Resource name associated for agent"
label: "Resource name"
type: "string"
required: true
default: "docker"
- variable: "GOCD_USER"
description: "Login to connect on GoCD"
label: "Login"
type: "string"
required: true
default: "admin"
- variable: "GOCD_PASSWORD"
description: "Password to connect on GoCD"
label: "Password"
type: "password"
required: true
- variable: "GOCD_AGENT_PACKAGE"
description: "Install GoCD extra plugins"
label: "Install extra plugins"
required: true
type: enum
default: "true"
options:
- "true"
- "false"
- variable: "VOLUME_DRIVER_SERVER"
description: "Docker driver to store volume or base path for GoCD server"
label: "Volume drver / Path for server"
type: "string"
required: true
default: "local"
- variable: "VOLUME_DRIVER_AGENT"
description: "Docker driver to store volume or base path for GoCD agent"
label: "Volume drver / Path for agent"
type: "string"
required: true
default: "local"
- variable: "DEPLOY_LB"
description: "Deploy Loadbalancer"
label: "Deploy Loadbalancer"
required: true
type: enum
default: "true"
options:
- "true"
- "false"
- variable: "PUBLISH_PORT"
description: "Set port if you want publish external port for GoCD server or Loadbalancer"
label: "Publish port"
required: false
type: "string"
default: "8153"
- variable: "GOCD_SERVER_URL"
description: "The server URL use by agent to auto register. Don't touch if you deploy server and agent"
label: "Server URL"
required: true
type: "string"
default: "https://gocd-server:8154/go"
- variable: "CONFD_BACKEND"
description: "The confd backend to grab config"
label: "Confd backend"
required: true
default: "env"
type: "string"
- variable: "CONFD_NODES"
description: "The confd nodes"
label: "Confd nodes"
required: false
type: "string"
- variable: "CONFD_PREFIX"
description: "The confd prefix"
label: "Confd prefix"
required: true
default: "/gocd"
type: "string"
services:
gocd-agent:
scale: ${GOCD_AGENT_SCALE}
retain_ip: true
gocd-server:
scale: 1
retain_ip: false
health_check:
port: 8153
interval: 5000
unhealthy_threshold: 3
request_line: ''
healthy_threshold: 2
response_timeout: 5000
lb:
scale: 1
start_on_create: true
lb_config:
certs: []
port_rules:
- priority: 1
protocol: http
service: gocd-server
source_port: 8153
target_port: 8153
health_check:
response_timeout: 2000
healthy_threshold: 2
port: 42
unhealthy_threshold: 3
interval: 2000

Binary file not shown (new image, 3.1 KiB).

View File

@ -0,0 +1,8 @@
name: GoCD
description: |
GoCD Stack (server and agents)
version: 17.3.0-rancher1
category: Continuous Integration
maintainer: "Sebastien Langoureaux <linuxworkgroup@gmail.com>"
license: Apache License
projectURL: https://www.gocd.io/

View File

@ -19,7 +19,7 @@
type: "int"
- variable: ssh_port
description: "ssh port to access gogs cli"
label: "Ssh Port"
label: "SSH Port"
required: true
default: "222"
type: "int"
@ -27,7 +27,7 @@
description: "mysql root password"
label: "Mysql Password"
required: true
default: "password"
type: "string"
default: ""
type: "password"
gogs:

View File

@ -94,7 +94,7 @@
- variable: AWS_SECRET_KEY
label: AWS Route53 Secret Access Key
description: Enter the Secret Access Key for your AWS account.
type: string
type: password
required: false
- variable: CLOUDFLARE_EMAIL
label: CloudFlare Email Address
@ -104,12 +104,12 @@
- variable: CLOUDFLARE_KEY
label: CloudFlare API Key
description: Enter the Global API Key for your CloudFlare account.
type: string
type: password
required: false
- variable: DO_ACCESS_TOKEN
label: DigitalOcean API Access Token
description: Enter the Personal Access Token for your DigitalOcean account.
type: string
type: password
required: false
- variable: DNSIMPLE_EMAIL
label: DNSimple Email Address
@ -119,7 +119,7 @@
- variable: DNSIMPLE_KEY
label: DNSimple API Key
description: Enter your DNSimple API key.
type: string
type: password
required: false
- variable: DYN_CUSTOMER_NAME
label: Dyn Customer Name
@ -134,12 +134,12 @@
- variable: DYN_PASSWORD
label: Dyn Password
description: Enter your Dyn password.
type: string
type: password
required: false
- variable: GANDI_API_KEY
label: Gandi API Key
description: Enter the API key for your Gandi account.
type: string
type: password
required: false
- variable: OVH_APPLICATION_KEY
label: OVH Application Key
@ -149,15 +149,15 @@
- variable: OVH_APPLICATION_SECRET
label: OVH Application Secret
description: Enter your OVH application secret.
type: string
type: password
required: false
- variable: OVH_CONSUMER_KEY
label: OVH Consumer Key
description: Enter your OVH consumer key.
type: string
type: password
required: false
- variable: VULTR_API_KEY
label: Vultr API Key
description: Enter the API key for your Vultr account.
type: string
type: password
required: false

View File

@ -0,0 +1,49 @@
# Minio.io
### Info:
This template creates, scales in, and scales out a multi-node Minio cluster on top of Rancher. The configuration is generated with confd from Rancher metadata.
Cluster size is static after deployment; this means you must redeploy the stack if you want to change the size of your cluster (a minio.io limitation).
### Usage:
Select Minio Cloud Storage from the catalog.
Enter the number of nodes for your Minio cluster and set the key and secret used to connect to Minio.
Click deploy.
Minio can now be accessed over the Rancher network on port `9000` (http://IP_CONTAINER:9000). To access it from outside the Rancher network, you need to set up a load balancer or expose port 9000.
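Once reachable, you can point the Minio client at the endpoint — a sketch assuming the `mc` CLI of that era (alias, keys and bucket name are placeholders):
```
# hypothetical alias and credentials; use the key/secret you set at deploy time
mc config host add myminio http://IP_CONTAINER:9000 <MINIO_ACCESS_KEY> <MINIO_SECRET_KEY>
mc mb myminio/demo-bucket
mc cp ./hello.txt myminio/demo-bucket/
```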
### Disks / nodes
You can set multiple disks per node (max of 4). If you use local disks (no extra Docker driver), you need to mount them under the same `base path` and enter this path in the `Volume driver / Path` section.
Moreover, you need to use the same disk name with a number as suffix (starting from 0) and enter this name in the `Disk base name` section.
For example, to use 4 disks per node:
- Number of disks per node: 4
- Volume driver / Path: /data/minio
- Disk base name: disk
And you have to mount the following partitions:
- /data/minio/disk0
- /data/minio/disk1
- /data/minio/disk2
- /data/minio/disk3
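As an illustration (hypothetical device names; adapt to your hosts), those mount points could be prepared like this:
```
# format and mount one local disk per mount point
mkfs.ext4 /dev/xvdb
mkdir -p /data/minio/disk0
mount /dev/xvdb /data/minio/disk0
```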
For more info about nodes and disks, you can read the [official documentation](https://github.com/minio/minio/tree/master/docs/distributed)
### Advanced info
1. This template first creates a container called `rancher-cattle-metadata`. It embeds confd, with some scripts that get settings from the Cattle scheduler and expose them through the volume.
2. Then, the template creates the `minio` container. It launches the scripts provided by the `rancher-cattle-metadata` container via `volumes_from`. These create the /opt/scheduler/conf/scheduler.cfg file with some useful info about the container, service, stack and host. Next, it sources `/opt/scheduler/conf/scheduler.cfg` and launches the confd scripts to configure Minio.
### Source, bugs and enhancements
If you find bugs or want an enhancement, you can open a ticket on GitHub:
- [Minio official core project](https://github.com/minio/minio)
- [Minio docker image](https://github.com/disaster37/alpine-minio)
- [Rancher Cattle metadata docker image](https://github.com/disaster37/rancher-cattle-metadata)

View File

@ -0,0 +1,70 @@
version: '2'
services:
minio-server:
tty: true
image: webcenter/alpine-minio:2017-03-16_4
volumes:
- minio-scheduler-setting:/opt/scheduler
{{- if contains .Values.VOLUME_DRIVER "/" }}
{{- range $idx, $e := atoi .Values.MINIO_DISKS | until }}
- {{$.Values.VOLUME_DRIVER}}/{{$.Values.DISK_BASE_NAME}}{{$idx}}:/data/disk{{$idx}}
{{- end}}
{{- else}}
{{- range $idx, $e := atoi .Values.MINIO_DISKS | until }}
- minio-data-{{$idx}}:/data/disk{{$idx}}
{{- end}}
{{- end}}
environment:
- MINIO_CONFIG_minio.access.key=${MINIO_ACCESS_KEY}
- MINIO_CONFIG_minio.secret.key=${MINIO_SECRET_KEY}
- CONFD_BACKEND=${CONFD_BACKEND}
- CONFD_NODES=${CONFD_NODES}
- CONFD_PREFIX_KEY=${CONFD_PREFIX}
{{- range $idx, $e := atoi .Values.MINIO_DISKS | until }}
- MINIO_DISKS_{{$idx}}=disk{{$idx}}
{{- end}}
{{- if and (ne .Values.DEPLOY_LB "true") .Values.PUBLISH_PORT}}
ports:
- ${PUBLISH_PORT}:9000
{{- end}}
labels:
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: rancher-cattle-metadata
rancher-cattle-metadata:
network_mode: none
labels:
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: "true"
image: webcenter/rancher-cattle-metadata:1.0.1
volumes:
- minio-scheduler-setting:/opt/scheduler
{{- if eq .Values.DEPLOY_LB "true"}}
lb:
image: rancher/lb-service-haproxy:v0.6.2
{{- if .Values.PUBLISH_PORT}}
ports:
- ${PUBLISH_PORT}:9000/tcp
{{- else}}
expose:
- 9000:9000/tcp
{{- end}}
links:
- minio-server:minio-server
labels:
io.rancher.container.agent.role: environmentAdmin
io.rancher.container.create_agent: 'true'
{{- end}}
volumes:
minio-scheduler-setting:
driver: local
per_container: true
{{- if not (contains .Values.VOLUME_DRIVER "/")}}
{{- range $idx, $e := atoi .Values.MINIO_DISKS | until }}
minio-data-{{$idx}}:
per_container: true
driver: ${VOLUME_DRIVER}
{{- end}}
{{- end}}

View File

@ -0,0 +1,114 @@
version: '2'
catalog:
name: Minio
version: 2017-03-16-rancher1
minimum_rancher_version: v1.5.0
questions:
- variable: "MINIO_SCALE"
description: "Number of minio nodes."
label: "Minio Nodes"
required: true
default: 1
type: enum
options:
- 1
- 4
- 6
- 8
- 10
- 12
- 14
- 16
- variable: "MINIO_DISKS"
description: "Number of disks per node"
label: "Disks Per Node"
required: true
type: enum
default: 1
options:
- 1
- 2
- 4
- variable: "DISK_BASE_NAME"
description: "The base name for each disk"
label: "Disk base name"
type: "string"
required: true
default: "disk"
- variable: "VOLUME_DRIVER"
description: "Docker driver to store volume or base path for each disks"
label: "Volume drver / Path"
type: "string"
required: true
default: "local"
- variable: "MINIO_ACCESS_KEY"
description: "The key to connect on minio"
label: "Minio key"
required: true
type: "string"
- variable: "MINIO_SECRET_KEY"
description: "The secret key to connect on minio"
label: "Minio secret key"
required: true
type: "password"
- variable: "DEPLOY_LB"
description: "Deploy Loadbalancer"
label: "Deploy Loadbalancer"
required: true
type: enum
default: "true"
options:
- "true"
- "false"
- variable: "PUBLISH_PORT"
description: "Set port if you want publish external port for minio or Loadbalancer"
label: "Publish port"
required: false
type: "string"
default: "9000"
- variable: "CONFD_BACKEND"
description: "The confd backend to grab config"
label: "Confd backend"
required: true
default: "env"
type: "string"
- variable: "CONFD_NODES"
description: "The confd nodes"
label: "Confd nodes"
required: false
type: "string"
- variable: "CONFD_PREFIX"
description: "The confd prefix"
label: "Confd prefix"
required: true
default: "/minio"
type: "string"
services:
minio-server:
scale: ${MINIO_SCALE}
retain_ip: true
health_check:
port: 9000
interval: 5000
unhealthy_threshold: 3
request_line: ''
healthy_threshold: 2
response_timeout: 5000
lb:
scale: 1
start_on_create: true
lb_config:
certs: []
port_rules:
- priority: 1
protocol: http
service: minio-server
source_port: 9000
target_port: 9000
health_check:
response_timeout: 2000
healthy_threshold: 2
port: 42
unhealthy_threshold: 3
interval: 2000

File diff suppressed because it is too large (new image, 261 KiB).

View File

@ -0,0 +1,8 @@
name: Minio Cloud Storage
description: |
Store photos, videos, VMs, containers, log files, or any blob of data as objects.
version: 2017-03-16-rancher1
category: Storage
maintainer: "Sebastien Langoureaux <linuxworkgroup@gmail.com>"
license: Apache License
projectURL: https://minio.io/

View File

@ -0,0 +1,18 @@
# NeuVector
### Info:
NeuVector provides continuous network security for application containers.
Deploy the NeuVector containers to protect running containers from violations, threats, and vulnerabilities. NeuVector also detects host and container privilege escalations / breakouts.
NeuVector can be deployed on greenfield or brownfield (already running) application environments.
### Usage:
Contact <a style="color:red;font-weight:bold" href="mailto:info@neuvector.com?Subject=Rancher%20Catalog" target="_top">info@neuvector.com</a> with your Docker Hub ID so we can add you to our private registry.
After we confirm that you have been added, you can select the NeuVector catalog to deploy the Allinone and Enforcer containers.
The Manager listens on port 8443 by default and uses HTTPS for logging in to the console.
The default username is admin and password is admin. After successful login, the admin user should update the account with a more secure password.

View File

@ -0,0 +1,35 @@
allinone:
image: neuvector/allinone:1.1.0
container_name: neuvector.allinone
restart: always
privileged: true
environment:
- affinity:com.myself.name!=neuvector
- CLUSTER_JOIN_ADDR=allinone
ports:
- 8443:8443
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /proc:/host/proc:ro
- /sys/fs/cgroup:/host/cgroup:ro
labels:
com.myself.name: "neuvector"
io.rancher.scheduler.affinity:host_label: ${NV_ALLINONE_LABEL}
io.rancher.container.hostname_override: container_name
enforcer:
image: neuvector/enforcer:1.1.0
container_name: neuvector.enforcer
restart: always
privileged: true
environment:
- affinity:com.myself.name!=neuvector
- CLUSTER_JOIN_ADDR=allinone
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /proc:/host/proc:ro
- /sys/fs/cgroup/:/host/cgroup/:ro
labels:
com.myself.name: "neuvector"
io.rancher.scheduler.global: true
io.rancher.scheduler.affinity:host_label_ne: ${NV_ALLINONE_LABEL}
io.rancher.container.hostname_override: container_name

View File

@ -0,0 +1,11 @@
.catalog:
name: "NeuVector"
version: "v1.1.0"
description: "Container Security Solution"
questions:
- variable: "NV_ALLINONE_LABEL"
label: "Allinone Host label"
description: "Specify a host label here that can be used to deploy the NeuVector AllInOne container, the NeuVector enforcer container will be deployed on any other hosts. Eg: neuvector.allinone_node=true (you could then add the label 'neuvector.allinone_node=true' to one host to use as management node)."
type: "string"
default: "neuvector.allinone_node=true"
required: true

Binary file not shown (new image, 7.1 KiB).

View File

@ -0,0 +1,6 @@
name: NeuVector
description: |
Container Application Security
version: v1.1.0
category: Security
maintainer: neuvector support <support@neuvector.com>

View File

@ -47,7 +47,7 @@ nuxeo:
# Response timeout is measured in milliseconds
response_timeout: 2000
elasticsearch:
elasticsearch-masters:
metadata:
elasticsearch:
yml:

View File

@ -1,9 +1,15 @@
# [1.1.2-GA Documentation](http://docs.portworx.com)
# [1.1.6-GA Documentation](http://docs.portworx.com)
This catalog will spin up Portworx on your hosts.
There are 4 configuration variables required:
1. **cluster_id**: Arbitrary Cluster ID, common to all nodes in PX cluster. (Can use https://www.uuidgenerator.net for example)
2. **kvdb**: A Key-value database that is accessible to all nodes in the PX cluster. (Ex: etcd://10.0.0.42:4001)
3. **header_dir**: The directory where kernel headers can be found. Default is "/usr/src". For CoreOS use "/lib/modules"
4. **use_disks**: The list of devices to use as part of the cluster fabric. (Ex: '-a' for all disks, or '-s /dev/sdX' for each individual disk)
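For example, a plausible set of answers might look like this (values are illustrative only, drawn from the defaults and examples above):
```
cluster_id: 0a1b2c3d-4e5f-6789-abcd-ef0123456789   # any UUID, e.g. from uuidgenerator.net
kvdb: etcd://10.0.0.42:4001
header_dir: /usr/src
use_disks: -s /dev/xvdb
```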
**NOTE**: px-dev requires at least one non-root disk to be attached to the running image (i.e. local disk or iSCSI).
**NOTE**: If using Docker prior to 1.12, then you **MUST** remove 'MOUNT=shared' from the docker.service file and restart the docker service.
For detailed documentation, please visit [docs.portworx.com](http://docs.portworx.com)

View File

@ -3,7 +3,7 @@ portworx:
io.rancher.container.create_agent: 'true'
io.rancher.scheduler.global: 'true'
io.rancher.container.pull_image: 'always'
image: portworx/px-dev
image: portworx/px-dev:edge
container_name: px
ipc: host
net: host
@ -11,13 +11,15 @@ portworx:
environment:
CLUSTER_ID: ${cluster_id}
KVDB: ${kvdb}
HDR_DIR: ${header_dir}
USE_DISKS: ${use_disks}
volumes:
- /dev:/dev
- /usr/src:/usr/src
- ${header_dir}:${header_dir}
- /run/docker/plugins:/run/docker/plugins
- /var/lib/osd:/var/lib/osd:shared
- /etc/pwx:/etc/pwx
- /opt/pwx/bin:/export_bin:shared
- /var/run/docker.sock:/var/run/docker.sock
- /var/cores:/var/cores
command: -c ${cluster_id} -k ${kvdb} -a -z -f
command: -c ${cluster_id} -k ${kvdb} ${use_disks}

View File

@ -1,8 +1,8 @@
.catalog:
name: "Portworx"
version: "1.1.2-2017-01-06-GA"
version: "1.1.6-2017-02-08-GA"
description: "Container Defined Storage for Docker"
uuid: 352669-pwx-1.1.2
uuid: 352669-pwx-1.1.6
minimum_rancher_version: v0.56.0
questions:
- variable: cluster_id
@ -17,3 +17,15 @@
type: "string"
required: true
default: ""
- variable: use_disks
description: "Cmdline args for disks to use. Ex: '-a' for all available, or '-s /dev/sdX' for each individual disk"
label: "Use Disks"
type: "string"
required: true
default: "-s /dev/xvdb"
- variable: header_dir
description: "Directory where kernel headers can be found. Default is '/usr/src'. For CoreOS use '/lib/modules'"
label: "Headers Directory"
type: "string"
required: true
default: "/usr/src"

View File

@ -1,5 +1,5 @@
name: px-dev
description: |
Software defined enterprise storage for Linux Containers.
version: 1.1.2-2017-01-06-GA
version: 1.1.6-2017-02-08-GA
category: Storage

View File

@ -4,4 +4,4 @@
## Info
* Easy setup with all needed data: `database_name`, `user`, `password`
* Load Balancer used to forroward Postgress port for the external services.
* Load Balancer used to forward Postgres port for the external services.

View File

@ -33,7 +33,9 @@ Traefik labels has to be added in your services, in order to get included in tra
- false: the service will not be published
- traefik.priority = <priority> # Override for frontend priority. 5 by default
- traefik.protocol = < http | https > # Override the default http protocol
- traefik.alias = < alias > # Alternate names to route rule. Multiple values separated by ",". WARNING: You could have collisions. BE CAREFUL
- traefik.sticky = < true | false > # Enable/disable sticky sessions to the backend
- traefik.alias = < alias > # Alternate names to route rule. Multiple values separated by ",". traefik.domain is appended. WARNING: You could have collisions. BE CAREFUL
- traefik.alias.fqdn = < alias fqdn > # Alternate names to route rule. Multiple values separated by ",". traefik.domain must be defined but is not appended here.
- traefik.domain = < domain.name > # Domain names to route rules. Multiple domains separated by ","
- traefik.domain.regexp = < domain.regexp > # Domain name regexp rule. Multiple domains separated by ","
- traefik.port = < port > # Port to expose through traefik
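Putting a few of these together, a service might be published with compose labels along these lines (a minimal sketch; service name, domain and port are placeholders, and the enable label is assumed from the list above):
```
my-app:
  image: nginx:stable
  labels:
    traefik.enable: 'true'
    traefik.domain: example.com
    traefik.port: '80'
```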

View File

@ -33,7 +33,7 @@ traefik-conf:
io.rancher.container.start_once: 'true'
tty: true
log_opt: {}
image: rawmind/rancher-traefik:0.3.4-18
image: rawmind/rancher-traefik:0.3.4-19
net: none
volumes:
- /opt/tools

View File

@ -23,7 +23,7 @@ zammad-scheduler:
start_on_create: true
zammad-railsserver:
scale: 1
tart_on_create: true
start_on_create: true
zammad-websocket:
scale: 1
start_on_create: true

View File

@ -41,7 +41,7 @@ https://docs.zammad.org/en/latest/api-intro.html
https://zammad.org/participate
Thanks! ❤️ ❤️ ❤️
Thanks!
Your Zammad Team