first commit
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This commit is contained in:
commit 2c9c5f005b

85 CONTRIBUTING.rst Normal file
@@ -0,0 +1,85 @@
plugins
########

:tags: openstack, cloud, ansible
:category: \*nix

contributor guidelines
^^^^^^^^^^^^^^^^^^^^^^

Filing Bugs
-----------

Bugs should be filed on Launchpad, not GitHub: "https://bugs.launchpad.net/openstack-ansible"

When submitting a bug, or working on a bug, please ensure the following criteria are met:

* The description clearly states or describes the original problem or root cause of the problem.
* Include historical information on how the problem was identified.
* Any relevant logs are included.
* The provided information should be totally self-contained. External access to web services/sites should not be needed.
* Steps to reproduce the problem are included if possible.

Submitting Code
---------------

Changes to the project should be submitted for review via the Gerrit tool, following
the workflow documented at: "http://docs.openstack.org/infra/manual/developers.html#development-workflow"

Pull requests submitted through GitHub will be ignored and closed without review.

Extra
-----

Tags:
    If it's a bug that needs fixing in a branch in addition to master, add a '\<release\>-backport-potential' tag (e.g. ``juno-backport-potential``). There are predefined tags that will autocomplete.

Status:
    Please leave this alone; it should remain *New* until someone triages the issue.

Importance:
    Should only be touched if it is a blocker or gating issue. If it is, set it to High, and use Critical only if you have found a bug that can take down whole infrastructures.

Style guide
-----------

When creating tasks and other roles for use in Ansible, please create them using the YAML dictionary format.

Example YAML dictionary format:

.. code-block:: yaml

    - name: The name of the tasks
      module_name:
        thing1: "some-stuff"
        thing2: "some-other-stuff"
      tags:
        - some-tag
        - some-other-tag

Example **NOT** in YAML dictionary format:

.. code-block:: yaml

    - name: The name of the tasks
      module_name: thing1="some-stuff" thing2="some-other-stuff"
      tags:
        - some-tag
        - some-other-tag

Usage of the ">" and "|" operators should be limited to Ansible conditionals and command modules such as the Ansible ``shell`` module.
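The ``|`` operator is appropriate, for example, when a ``shell`` task needs to run several commands verbatim; a hypothetical task for illustration only:

.. code-block:: yaml

    - name: Example multi-line shell task
      shell: |
        mkdir -p /tmp/example
        echo "done" > /tmp/example/status
      tags:
        - some-tag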
Issues
------

When submitting an issue, or working on an issue, please ensure the following criteria are met:

* The description clearly states or describes the original problem or root cause of the problem.
* Include historical information on how the problem was identified.
* Any relevant logs are included.
* If the issue is a bug that needs fixing in a branch other than master, add the 'backport potential' tag to the issue (not the pull request).
* The provided information should be totally self-contained. External access to web services/sites should not be needed.
* If the issue is needed for a hotfix release, add the 'expedite' label.
* Steps to reproduce the problem are included if possible.
202 LICENSE Normal file
@@ -0,0 +1,202 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright {yyyy} {name of copyright owner}

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
6 README.rst Normal file
@@ -0,0 +1,6 @@
plugins collection
##################

:tags: openstack, cloud, ansible, plugins
:category: \*nix

Plugins used to power OpenStack-Ansible and our various roles.
240 actions/config_template.py Normal file
@@ -0,0 +1,240 @@
# Copyright 2015, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import ConfigParser
import io
import json
import os

import yaml

from ansible import errors
from ansible.runner.return_data import ReturnData
from ansible import utils
from ansible.utils import template


CONFIG_TYPES = {
    'ini': 'return_config_overrides_ini',
    'json': 'return_config_overrides_json',
    'yaml': 'return_config_overrides_yaml'
}


class ActionModule(object):
    TRANSFERS_FILES = True

    def __init__(self, runner):
        self.runner = runner

    def grab_options(self, complex_args, module_args):
        """Grab passed options from Ansible complex and module args.

        :param complex_args: ``dict``
        :param module_args: ``dict``
        :returns: ``dict``
        """
        options = dict()
        if complex_args:
            options.update(complex_args)

        options.update(utils.parse_kv(module_args))
        return options

    @staticmethod
    def return_config_overrides_ini(config_overrides, resultant):
        """Returns string value from a modified config file.

        :param config_overrides: ``dict``
        :param resultant: ``str`` || ``unicode``
        :returns: ``str``
        """
        config = ConfigParser.RawConfigParser(allow_no_value=True)
        config_object = io.BytesIO(resultant.encode('utf-8'))
        config.readfp(config_object)
        for section, items in config_overrides.items():
            # If the items value is not a dictionary it is assumed that the
            # value is a default item for this config type.
            if not isinstance(items, dict):
                config.set('DEFAULT', str(section), str(items))
            else:
                # Attempt to add a section to the config file, passing if
                # an error is raised that is related to the section
                # already existing.
                try:
                    config.add_section(str(section))
                except (ConfigParser.DuplicateSectionError, ValueError):
                    pass
                for key, value in items.items():
                    config.set(str(section), str(key), str(value))
        else:
            config_object.close()

        resultant_bytesio = io.BytesIO()
        try:
            config.write(resultant_bytesio)
            return resultant_bytesio.getvalue()
        finally:
            resultant_bytesio.close()

    def return_config_overrides_json(self, config_overrides, resultant):
        """Returns config json.

        It's important to note that file ordering will not be preserved, as
        the information within the json file will be sorted by keys.

        :param config_overrides: ``dict``
        :param resultant: ``str`` || ``unicode``
        :returns: ``str``
        """
        original_resultant = json.loads(resultant)
        merged_resultant = self._merge_dict(
            base_items=original_resultant,
            new_items=config_overrides
        )
        return json.dumps(
            merged_resultant,
            indent=4,
            sort_keys=True
        )

    def return_config_overrides_yaml(self, config_overrides, resultant):
        """Return config yaml.

        :param config_overrides: ``dict``
        :param resultant: ``str`` || ``unicode``
        :returns: ``str``
        """
        original_resultant = yaml.safe_load(resultant)
        merged_resultant = self._merge_dict(
            base_items=original_resultant,
            new_items=config_overrides
        )
        return yaml.safe_dump(
            merged_resultant,
            default_flow_style=False,
            width=1000,
        )

    def _merge_dict(self, base_items, new_items):
        """Recursively merge new_items into base_items.

        :param base_items: ``dict``
        :param new_items: ``dict``
        :returns: ``dict``
        """
        for key, value in new_items.iteritems():
            if isinstance(value, dict):
                base_items[key] = self._merge_dict(
                    base_items.get(key, {}),
                    value
                )
            elif isinstance(value, list):
                if key in base_items and isinstance(base_items[key], list):
                    base_items[key].extend(value)
                else:
                    base_items[key] = value
            else:
                base_items[key] = new_items[key]
        return base_items

    def run(self, conn, tmp, module_name, module_args, inject,
            complex_args=None, **kwargs):
        """Run the method"""
        if not self.runner.is_playbook:
            raise errors.AnsibleError(
                'FAILED: `config_templates` are only available in playbooks'
            )

        options = self.grab_options(complex_args, module_args)
        try:
            source = options['src']
            dest = options['dest']

            config_overrides = options.get('config_overrides', dict())
            config_type = options['config_type']
            assert config_type.lower() in ['ini', 'json', 'yaml']
        except KeyError as exp:
            result = dict(failed=True, msg=exp)
            return ReturnData(conn=conn, comm_ok=False, result=result)

        source_template = template.template(
            self.runner.basedir,
            source,
            inject
        )

        if '_original_file' in inject:
            source_file = utils.path_dwim_relative(
                inject['_original_file'],
                'templates',
                source_template,
                self.runner.basedir
            )
        else:
            source_file = utils.path_dwim(self.runner.basedir, source_template)

        # Open the template file and return the data as a string. This is
        # being done here so that the file can be a vault encrypted file.
        resultant = template.template_from_file(
            self.runner.basedir,
            source_file,
            inject,
            vault_password=self.runner.vault_pass
        )

        if config_overrides:
            type_merger = getattr(self, CONFIG_TYPES.get(config_type))
            resultant = type_merger(
                config_overrides=config_overrides,
                resultant=resultant
            )

        # Retemplate the resultant object as it may have new data within it
        # as provided by an override variable.
        template.template_from_string(
            basedir=self.runner.basedir,
            data=resultant,
            vars=inject,
            fail_on_undefined=True
        )

        # Access to protected method is unavoidable in Ansible 1.x.
        new_module_args = dict(
            src=self.runner._transfer_str(conn, tmp, 'source', resultant),
            dest=dest,
            original_basename=os.path.basename(source),
            follow=True,
        )

        module_args_tmp = utils.merge_module_args(
            module_args,
            new_module_args
        )

        # Remove data types that are not available to the copy module
        complex_args.pop('config_overrides')
        complex_args.pop('config_type')

        # Return the copy module status. Access to protected method is
        # unavoidable in Ansible 1.x.
        return self.runner._execute_module(
            conn,
            tmp,
            'copy',
            module_args_tmp,
            inject=inject,
            complex_args=complex_args
        )
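The recursive override merge implemented by `_merge_dict` above can be exercised in isolation; a minimal Python 3 sketch of the same logic (using `items()` in place of the Python 2 `iteritems()`, with hypothetical override data for illustration):

```python
def merge_dict(base_items, new_items):
    """Recursively merge new_items into base_items.

    Nested dicts are merged key by key, lists are extended, and any
    other value in new_items simply replaces the base value.
    """
    for key, value in new_items.items():
        if isinstance(value, dict):
            # Recurse into nested dicts, creating the base dict if absent.
            base_items[key] = merge_dict(base_items.get(key, {}), value)
        elif isinstance(value, list):
            # Lists are appended to, not replaced, when both sides are lists.
            if key in base_items and isinstance(base_items[key], list):
                base_items[key].extend(value)
            else:
                base_items[key] = value
        else:
            # Scalars from the overrides win outright.
            base_items[key] = value
    return base_items


base = {'DEFAULT': {'debug': False}, 'hosts': ['a']}
overrides = {'DEFAULT': {'debug': True, 'verbose': True}, 'hosts': ['b']}
merged = merge_dict(base, overrides)
# merged == {'DEFAULT': {'debug': True, 'verbose': True}, 'hosts': ['a', 'b']}
```

This is why an override such as `config_overrides: {DEFAULT: {debug: True}}` only touches the keys it names and leaves the rest of the rendered template intact.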
77 callbacks/profile_tasks.py Normal file
@@ -0,0 +1,77 @@
# The MIT License (MIT)
#
# Copyright (c) 2015, Red Hat, Inc. and others
# Copyright (c) 2015, Rackspace US, Inc.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ----------------------------------------------------------------------------
#
# Note that this callback plugin isn't enabled by default. If you'd like to
# enable it, add the following line to ansible.cfg in the 'playbooks'
# directory in this repository:
#
#     callback_plugins = plugins/callbacks
#
# Add that line prior to running the playbooks and you will have detailed
# timing information for Ansible tasks right after each playbook finishes
# running.
#
import time


class CallbackModule(object):
    """
    A plugin for timing tasks
    """
    def __init__(self):
        self.stats = {}
        self.current = None

    def playbook_on_task_start(self, name, is_conditional):
        """
        Logs the start of each task
        """
        if self.current is not None:
            # Record the running time of the last executed task
            self.stats[self.current] = time.time() - self.stats[self.current]

        # Record the start time of the current task
        self.current = name
        self.stats[self.current] = time.time()

    def playbook_on_stats(self, stats):
        """
        Prints the timings
        """
        # Record the timing of the very last task
        if self.current is not None:
            self.stats[self.current] = time.time() - self.stats[self.current]

        # Sort the tasks by their running time
        results = sorted(self.stats.items(), key=lambda value: value[1],
                         reverse=True)

        # Just keep the top 10
        results = results[:10]

        # Print the timings
        for name, elapsed in results:
            print "{0:-<70}{1:->9}".format('{0} '.format(name),
                                           ' {0:.02f}s'.format(elapsed))
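The sorting and report formatting in `playbook_on_stats` can be sketched on its own; a minimal Python 3 version of the same computation (hypothetical task names, and a returned list of lines rather than the Python 2 print statement):

```python
def format_timings(stats, top=10):
    """Return report lines for the slowest tasks, longest first.

    stats maps task name -> elapsed seconds; only the top N are kept,
    each rendered as a dash-padded line just like the callback prints.
    """
    # Sort by elapsed time, descending, then keep only the top N entries.
    results = sorted(stats.items(), key=lambda value: value[1], reverse=True)
    return [
        "{0:-<70}{1:->9}".format('{0} '.format(name),
                                 ' {0:.02f}s'.format(elapsed))
        for name, elapsed in results[:top]
    ]


# Hypothetical timings a playbook run might produce.
timings = {'install packages': 12.5, 'copy configs': 0.4, 'restart services': 3.1}
report = format_timings(timings)
```

Here `report[0]` is the slowest task ("install packages"), padded with dashes to a fixed-width line ending in its elapsed time.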
6 dev-requirements.txt Normal file
@@ -0,0 +1,6 @@
ansible-lint
ansible>=1.9.1,<2.0.0

# this is required for the docs build jobs
sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
oslosphinx>=2.5.0 # Apache-2.0
195 doc/Makefile Normal file
@@ -0,0 +1,195 @@
# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = build

# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  applehelp  to make an Apple Help Book"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"
	@echo "  coverage   to run coverage check of the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/openstack-ansible-plugins.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/openstack-ansible-plugins.qhc"

applehelp:
	$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
	@echo
	@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
	@echo "N.B. You won't be able to view it unless you put it in" \
	      "~/Library/Documentation/Help or install it in your application" \
	      "bundle."

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/openstack-ansible-plugins"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/openstack-ansible-plugins"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
|
||||
|
||||
gettext:
|
||||
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
|
||||
@echo
|
||||
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
|
||||
|
||||
changes:
|
||||
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
|
||||
@echo
|
||||
@echo "The overview file is in $(BUILDDIR)/changes."
|
||||
|
||||
linkcheck:
|
||||
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
|
||||
@echo
|
||||
@echo "Link check complete; look for any errors in the above output " \
|
||||
"or in $(BUILDDIR)/linkcheck/output.txt."
|
||||
|
||||
doctest:
|
||||
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
|
||||
@echo "Testing of doctests in the sources finished, look at the " \
|
||||
"results in $(BUILDDIR)/doctest/output.txt."
|
||||
|
||||
coverage:
|
||||
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
|
||||
@echo "Testing of coverage in the sources finished, look at the " \
|
||||
"results in $(BUILDDIR)/coverage/python.txt."
|
||||
|
||||
xml:
|
||||
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
|
||||
@echo
|
||||
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
|
||||
|
||||
pseudoxml:
|
||||
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
|
||||
@echo
|
||||
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
|
||||
|
||||
livehtml: html
|
||||
sphinx-autobuild -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
|
290
doc/source/conf.py
Normal file
@@ -0,0 +1,290 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# openstack-ansible-plugins documentation build configuration file, created by
# sphinx-quickstart on Mon Apr 13 20:42:26 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'oslosphinx'
]

# The link to the browsable source code (for the left hand menu)
oslosphinx_cgit_link = 'http://git.openstack.org/cgit/openstack/openstack-ansible-plugins'

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix(es) of source filenames.
# You can specify multiple suffixes as a list of strings:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'

# The encoding of source files.
# source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = 'openstack-ansible-plugins'
copyright = '2015, openstack-ansible-plugins contributors'
author = 'openstack-ansible-plugins contributors'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = 'master'
# The full version, including alpha/beta/rc tags.
release = 'master'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []

# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []

# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
# html_theme = 'alabaster'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}

# If false, no module index is generated.
# html_domain_indices = True

# If false, no index is generated.
# html_use_index = True

# If true, the index is split into individual pages for each letter.
# html_split_index = False

# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None

# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
#   'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
#   'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
# html_search_language = 'en'

# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
# html_search_options = {'type': 'default'}

# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
# html_search_scorer = 'scorer.js'

# Output file base name for HTML help builder.
htmlhelp_basename = 'openstack-ansible-pluginsdoc'

# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    # 'preamble': '',

    # Latex figure (float) alignment
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'openstack-ansible-plugins.tex',
     'openstack-ansible-plugins Documentation',
     'openstack-ansible-plugins contributors', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False

# If true, show page references after internal links.
# latex_show_pagerefs = False

# If true, show URL addresses after external links.
# latex_show_urls = False

# Documents to append as an appendix to all manuals.
# latex_appendices = []

# If false, no module index is generated.
# latex_domain_indices = True


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'openstack-ansible-plugins',
     'openstack-ansible-plugins Documentation',
     [author], 1)
]

# If true, show URL addresses after external links.
# man_show_urls = False


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'openstack-ansible-plugins',
     'openstack-ansible-plugins Documentation',
     author, 'openstack-ansible-plugins', 'One line description of project.',
     'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
# texinfo_appendices = []

# If false, no module index is generated.
# texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
36
doc/source/index.rst
Normal file
@@ -0,0 +1,36 @@
plugins Docs
============

These are the plugins the OpenStack-Ansible deployment project relies on.
The plugins can be added to any OpenStack deployment by cloning this
repository into your plugin and library source location and configuring the
``ansible.cfg`` file to point at them as additional plugins for your project.


Example ansible.cfg file
------------------------

.. code-block:: ini

    [defaults]
    lookup_plugins = /etc/ansible/plugins/lookups
    filter_plugins = /etc/ansible/plugins/filters
    action_plugins = /etc/ansible/plugins/actions
    library = /etc/ansible/plugins/library


Example role requirement overload for automatic plugin download
---------------------------------------------------------------

The Ansible role requirement file can be used to overload the
``ansible-galaxy`` command so that it automatically fetches the plugins for
you in a given project. To do this, add the following lines to your
``ansible-role-requirements.yml`` file.

.. code-block:: yaml

    - name: plugins
      src: https://github.com/openstack/openstack-ansible-plugins
      path: /etc/ansible
      scm: git
      version: master
244
filters/osa-filters.py
Normal file
@@ -0,0 +1,244 @@
# Copyright 2015, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# (c) 2015, Kevin Carter <kevin.carter@rackspace.com>

import hashlib
import os
import re

try:
    import urlparse  # Python 2
except ImportError:
    # Python 3 moved urlparse into urllib.parse; alias it for compatibility.
    from urllib import parse as urlparse

from ansible import errors


"""Filter usage:

Simple filters that may be useful from within the stack
"""


def _pip_requirement_split(requirement):
    version_descriptors = "(>=|<=|>|<|==|~=|!=)"
    requirement = requirement.split(';')
    requirement_info = re.split(r'%s\s*' % version_descriptors, requirement[0])
    name = requirement_info[0]
    marker = None
    if len(requirement) > 1:
        marker = requirement[1]
    versions = None
    if len(requirement_info) > 1:
        versions = requirement_info[1]

    return name, versions, marker


def _lower_set_lists(list_one, list_two):
    _list_one = set([i.lower() for i in list_one])
    _list_two = set([i.lower() for i in list_two])
    return _list_one, _list_two


def bit_length_power_of_2(value):
    """Return the smallest power of 2 greater than a numeric value.

    :param value: Number to find the smallest power of 2
    :type value: ``int``
    :returns: ``int``
    """
    return 2**(int(value) - 1).bit_length()


def get_netloc(url):
    """Return the netloc from a URL.

    If the input value is not a valid URL the method will raise an Ansible
    filter exception.

    :param url: the URL to parse
    :type url: ``str``
    :returns: ``str``
    """
    try:
        netloc = urlparse.urlparse(url).netloc
    except Exception as exp:
        raise errors.AnsibleFilterError(
            'Failed to return the netloc of: "%s"' % str(exp)
        )
    else:
        return netloc


def get_netloc_no_port(url):
    """Return the netloc without a port from a URL.

    If the input value is not a valid URL the method will raise an Ansible
    filter exception.

    :param url: the URL to parse
    :type url: ``str``
    :returns: ``str``
    """
    return get_netloc(url=url).split(':')[0]


def get_netorigin(url):
    """Return the net origin (scheme and netloc) from a URL.

    If the input value is not a valid URL the method will raise an Ansible
    filter exception.

    :param url: the URL to parse
    :type url: ``str``
    :returns: ``str``
    """
    try:
        parsed_url = urlparse.urlparse(url)
        netloc = parsed_url.netloc
        scheme = parsed_url.scheme
    except Exception as exp:
        raise errors.AnsibleFilterError(
            'Failed to return the netorigin of: "%s"' % str(exp)
        )
    else:
        return '%s://%s' % (scheme, netloc)


def string_2_int(string):
    """Return an integer derived from a string.

    The string is hashed, converted to a base36 int, and the modulo of 10240
    is returned.

    :param string: string to retrieve an int from
    :type string: ``str``
    :returns: ``int``
    """
    # Try to encode utf-8 else pass
    try:
        string = string.encode('utf-8')
    except AttributeError:
        pass
    hashed_name = hashlib.sha256(string).hexdigest()
    return int(hashed_name, 36) % 10240


def pip_requirement_names(requirements):
    """Return a sorted ``list`` of unique requirement names.

    :param requirements: list of requirement strings, each of which may
                         contain version specifiers and environment markers.
    :type requirements: ``list``
    :return: ``list``
    """
    named_requirements = list()
    for requirement in requirements:
        name = _pip_requirement_split(requirement)[0]
        if name and not name.startswith('#'):
            named_requirements.append(name.lower())

    return sorted(set(named_requirements))


def pip_constraint_update(list_one, list_two):
    _list_one, _list_two = _lower_set_lists(list_one, list_two)
    _list_one, _list_two = list(_list_one), list(_list_two)
    for item2 in _list_two:
        item2_name, item2_versions, _ = _pip_requirement_split(item2)
        if item2_versions:
            for item1 in _list_one:
                if item2_name == _pip_requirement_split(item1)[0]:
                    item1_index = _list_one.index(item1)
                    _list_one[item1_index] = item2
                    break
            else:
                _list_one.append(item2)

    return sorted(_list_one)


def splitlines(string_with_lines):
    """Return a ``list`` from a string with lines."""
    return string_with_lines.splitlines()


def filtered_list(list_one, list_two):
    _list_one, _list_two = _lower_set_lists(list_one, list_two)
    return list(_list_one - _list_two)


def git_link_parse(repo):
    """Return a dict containing the parts of a git repository.

    :param repo: git repo string to parse.
    :type repo: ``str``
    :returns: ``dict``
    """
    if 'git+' in repo:
        _git_url = repo.split('git+', 1)[-1]
    else:
        _git_url = repo

    if '@' in _git_url:
        url, branch = _git_url.split('@', 1)
    else:
        url = _git_url
        branch = 'master'

    name = os.path.basename(url.rstrip('/'))
    _branch = branch.split('#')
    branch = _branch[0]

    plugin_path = None
    # Determine if the package is a plugin type
    if len(_branch) > 1 and 'subdirectory=' in _branch[-1]:
        plugin_path = _branch[-1].split('subdirectory=')[-1].split('&')[0]

    return {
        'name': name.split('.git')[0].lower(),
        'version': branch,
        'plugin_path': plugin_path,
        'url': url,
        'original': repo
    }


def git_link_parse_name(repo):
    """Return the name of a git repo."""
    return git_link_parse(repo)['name']


class FilterModule(object):
    """Ansible jinja2 filters."""

    @staticmethod
    def filters():
        return {
            'bit_length_power_of_2': bit_length_power_of_2,
            'netloc': get_netloc,
            'netloc_no_port': get_netloc_no_port,
            'netorigin': get_netorigin,
            'string_2_int': string_2_int,
            'pip_requirement_names': pip_requirement_names,
            'pip_constraint_update': pip_constraint_update,
            'splitlines': splitlines,
            'filtered_list': filtered_list,
            'git_link_parse': git_link_parse,
            'git_link_parse_name': git_link_parse_name
        }
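For a quick sense of how one of these filters behaves, the ``string_2_int`` logic can be exercised as a standalone snippet (a condensed copy of the function above, shown here only for illustration):

```python
import hashlib


def string_2_int(string):
    # Hash the input, then reduce the hex digest (whose characters are all
    # valid base-36 digits) to a stable integer in the range [0, 10240).
    try:
        string = string.encode('utf-8')
    except AttributeError:
        pass
    hashed_name = hashlib.sha256(string).hexdigest()
    return int(hashed_name, 36) % 10240


# The same input always maps to the same integer, which makes the filter
# usable for things like deterministic port or offset selection.
print(string_2_int('glance'))
```

Because the mapping is a modulo of a hash, distinct inputs can collide; the filter only guarantees determinism, not uniqueness.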
66
library/config_template
Normal file
@@ -0,0 +1,66 @@
# this is a virtual module that is entirely implemented server side

DOCUMENTATION = """
---
module: config_template
version_added: 1.9.2
short_description: Renders template files providing a create/update override interface
description:
  - The module contains the template functionality with the ability to
    override items in config, in transit, through the use of a simple
    dictionary without having to write out various temp files on target
    machines. The module renders all of the potential jinja a user could
    provide in both the template file and in the override dictionary, which
    is ideal for deployers who may have lots of different configs using a
    similar code base.
  - The module is an extension of the **copy** module and all of the
    attributes that can be set there are available to be set here.
options:
  src:
    description:
      - Path of a Jinja2 formatted template on the local server. This can
        be a relative or absolute path.
    required: true
    default: null
  dest:
    description:
      - Location to render the template to on the remote machine.
    required: true
    default: null
  config_overrides:
    description:
      - A dictionary used to update or override items within a configuration
        template. The dictionary data structure may be nested. If the target
        config file is an ini file the nested keys in the
        ``config_overrides`` will be used as section headers.
  config_type:
    description:
      - A string value describing the target config type.
    choices:
      - ini
      - json
      - yaml
author: Kevin Carter
"""

EXAMPLES = """
- name: run config template ini
  config_template:
    src: templates/test.ini.j2
    dest: /tmp/test.ini
    config_overrides: {}
    config_type: ini

- name: run config template json
  config_template:
    src: templates/test.json.j2
    dest: /tmp/test.json
    config_overrides: {}
    config_type: json

- name: run config template yaml
  config_template:
    src: templates/test.yaml.j2
    dest: /tmp/test.yaml
    config_overrides: {}
    config_type: yaml
"""
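The ``config_overrides`` behaviour for the ini case can be sketched in a few lines of standalone Python. This is an illustrative approximation using ``configparser`` (``apply_ini_overrides`` is a hypothetical helper), not the module's actual server-side implementation:

```python
import configparser
import io


def apply_ini_overrides(rendered_template, overrides):
    # Parse the rendered template, then lay the override dictionary on top:
    # top-level keys become section headers, nested keys become options.
    parser = configparser.ConfigParser()
    parser.read_string(rendered_template)
    for section, options in overrides.items():
        if not parser.has_section(section):
            parser.add_section(section)
        for key, value in options.items():
            parser.set(section, key, str(value))
    out = io.StringIO()
    parser.write(out)
    return out.getvalue()


# Override a single option in transit, without a temp file on the target.
print(apply_ini_overrides('[service]\ndebug = False\n',
                          {'service': {'debug': True}}))
```

The key point the module's description makes is that the override is applied in transit, so the target machine only ever sees the final merged file.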
168
library/dist_sort
Normal file
@@ -0,0 +1,168 @@
|
||||
#!/usr/bin/env python
|
||||
# (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
|
||||
#
|
||||
# Copyright 2014, Rackspace US, Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# import module snippets
|
||||
from ansible.module_utils.basic import *
|
||||
|
||||
DOCUMENTATION = """
|
||||
---
|
||||
module: dist_sort
|
||||
version_added: "1.6.6"
|
||||
short_description:
|
||||
- Deterministically sort a list to distribute the elements in the list
|
||||
evenly. Based on external values such as host or static modifier. Returns
|
||||
a string as named key ``sorted_list``.
|
||||
description:
|
||||
- This module returns a list of servers uniquely sorted based on a index
|
||||
from a look up value location within a group. The group should be an
|
||||
existing ansible inventory group. This will module returns the sorted
|
||||
list as a delimited string.
|
||||
options:
|
||||
src_list:
|
||||
description:
|
||||
- list in the form of a string separated by a delimiter.
|
||||
required: True
|
||||
ref_list:
|
||||
description:
|
||||
- list to lookup value_to_lookup against to return index number
|
||||
This should be a pre-determined ansible group containing the
|
||||
``value_to_lookup``.
|
||||
required: False
|
||||
value_to_lookup:
|
||||
description:
|
||||
- value is looked up against ref_list to get index number.
|
||||
required: False
|
||||
sort_modifier:
|
||||
description:
|
||||
- add a static int into the sort equation to weight the output.
|
||||
type: int
|
||||
default: 0
|
||||
delimiter:
|
||||
description:
|
||||
- delimiter used to parse ``src_list`` with.
|
||||
default: ','
|
||||
author:
|
||||
- Kevin Carter
|
||||
- Sam Yaple
|
||||
"""
|
||||
|
||||
EXAMPLES = """
|
||||
- dist_sort:
|
||||
value_to_lookup: "Hostname-in-ansible-group_name"
|
||||
ref_list: "{{ groups['group_name'] }}"
|
||||
src_list: "Server1,Server2,Server3"
|
||||
register: test_var
|
||||
|
||||
# With a pre-set delimiter
|
||||
- dist_sort:
|
||||
value_to_lookup: "Hostname-in-ansible-group_name"
|
||||
ref_list: "{{ groups['group_name'] }}"
|
||||
src_list: "Server1|Server2|Server3"
|
||||
delimiter: '|'
|
||||
register: test_var
|
||||
|
||||
# With a set modifier
|
||||
- dist_sort:
|
||||
value_to_lookup: "Hostname-in-ansible-group_name"
|
||||
ref_list: "{{ groups['group_name'] }}"
|
||||
src_list: "Server1#Server2#Server3"
|
||||
delimiter: '#'
|
||||
sort_modifier: 5
|
||||
register: test_var
|
||||
"""
|
||||
|
||||
|
||||
class DistSort(object):
|
||||
def __init__(self, module):
|
||||
"""Deterministically sort a list of servers.
|
||||
|
||||
:param module: The active ansible module.
|
||||
:type module: ``class``
|
||||
"""
|
||||
self.module = module
|
||||
self.params = self.module.params
|
||||
self.return_data = self._runner()
|
||||
|
||||
def _runner(self):
|
||||
"""Return the sorted list of servers.
|
||||
|
||||
Based on the modulo of index of a *value_to_lookup* from an ansible
|
||||
group this function will return a comma "delimiter" separated list of
|
||||
items.
|
||||
|
||||
:returns: ``str``
|
||||
"""
|
||||
index = self.params['ref_list'].index(self.params['value_to_lookup'])
|
||||
index += self.params['sort_modifier']
|
||||
src_list = self.params['src_list'].split(
|
||||
self.params['delimiter']
|
||||
)
|
||||
|
||||
for _ in range(index % len(src_list)):
|
||||
src_list.append(src_list.pop(0))
|
||||
else:
|
||||
return self.params['delimiter'].join(src_list)
|
||||
|
||||
|
||||
def main():
|
||||
"""Run the main app."""
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
value_to_lookup=dict(
|
||||
required=True,
|
||||
type='str'
|
||||
),
|
||||
ref_list=dict(
|
||||
required=True,
|
||||
type='list'
|
||||
),
|
||||
src_list=dict(
|
||||
required=True,
|
||||
type='str'
|
||||
),
|
||||
delimiter=dict(
|
||||
required=False,
|
||||
type='str',
|
||||
default=','
|
||||
),
|
||||
sort_modifier=dict(
|
||||
required=False,
|
||||
type='str',
|
||||
default='0'
|
||||
)
|
||||
),
|
||||
supports_check_mode=False
|
||||
)
|
||||
try:
|
||||
# This is done so that the failure can be parsed and does not cause
|
||||
# ansible to fail if a non-int is passed.
|
||||
module.params['sort_modifier'] = int(module.params['sort_modifier'])
|
||||
|
||||
_ds = DistSort(module=module)
|
||||
if _ds.return_data == module.params['src_list']:
|
||||
_changed = False
|
||||
else:
|
||||
_changed = True
|
||||
|
||||
module.exit_json(changed=_changed, **{'sorted_list': _ds.return_data})
|
||||
except Exception as exp:
|
||||
resp = {'stderr': str(exp)}
|
||||
resp.update(module.params)
|
||||
module.fail_json(msg='Failed Process', **resp)
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
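The `_runner` rotation above can be exercised outside of Ansible; this is a minimal standalone sketch of the same modulo-rotation logic (the `dist_sort` name is illustrative, not part of the module):

```python
def dist_sort(value_to_lookup, ref_list, src_list,
              delimiter=',', sort_modifier=0):
    """Rotate the delimited src_list by the index of value_to_lookup."""
    index = ref_list.index(value_to_lookup) + sort_modifier
    items = src_list.split(delimiter)
    # Rotate left: move the first item to the end, (index % len) times.
    for _ in range(index % len(items)):
        items.append(items.pop(0))
    return delimiter.join(items)
```

Each host thereby receives the same set of items in a different order, which is the load-distribution effect the module is after.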
236
library/glance
Normal file
@ -0,0 +1,236 @@
|
||||
#!/usr/bin/env python
|
||||
# Copyright 2014, Rackspace US, Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
import glanceclient.client as glclient
|
||||
import keystoneclient.v3.client as ksclient
|
||||
|
||||
# import module snippets
|
||||
from ansible.module_utils.basic import *
|
||||
|
||||
|
||||
DOCUMENTATION = """
|
||||
---
|
||||
module: glance
|
||||
short_description:
|
||||
- Basic module for interacting with openstack glance
|
||||
description:
|
||||
- Basic module for interacting with openstack glance
|
||||
options:
|
||||
command:
|
||||
description:
|
||||
- Operation for the module to perform. Currently available
|
||||
choices:
|
||||
- image-list
|
||||
- image-create
|
||||
openrc_path:
|
||||
description:
|
||||
- Path to openrc file from which credentials and keystoneclient
|
||||
- endpoint will be extracted
|
||||
image_name:
|
||||
description:
|
||||
- Name of the image to create
|
||||
image_url:
|
||||
description:
|
||||
- URL from which to download the image data
|
||||
image_container_format:
|
||||
description:
|
||||
- container format that the image uses (bare)
|
||||
image_disk_format:
|
||||
description:
|
||||
- disk format that the image uses
|
||||
image_is_public:
|
||||
description:
|
||||
- Should the image be visible to all tenants?
|
||||
choices:
|
||||
- true (public)
|
||||
- false (private)
|
||||
api_version:
|
||||
description:
|
||||
- which version of the glance api to use
|
||||
choices:
|
||||
- 1
|
||||
- 2
|
||||
default: 1
|
||||
insecure:
|
||||
description:
|
||||
- Explicitly allow client to perform "insecure" TLS
|
||||
choices:
|
||||
- false
|
||||
- true
|
||||
default: false
|
||||
author: Hugh Saunders
|
||||
"""
|
||||
|
||||
EXAMPLES = """
|
||||
# Create an image
|
||||
- name: Ensure cirros image
|
||||
glance:
|
||||
command: 'image-create'
|
||||
openrc_path: /root/openrc
|
||||
image_name: cirros
|
||||
image_url: 'https://example-domain.com/cirros-0.3.2-source.tar.gz'
|
||||
image_container_format: bare
|
||||
image_disk_format: qcow2
|
||||
image_is_public: True
|
||||
|
||||
# Get facts about existing images
|
||||
- name: Get image facts
|
||||
glance:
|
||||
command: 'image-list'
|
||||
openrc_path: /root/openrc
|
||||
"""
|
||||
|
||||
|
||||
COMMAND_MAP = {'image-list': 'list_images',
|
||||
'image-create': 'create_image'}
|
||||
|
||||
|
||||
class ManageGlance(object):
|
||||
def __init__(self, module):
|
||||
self.state_change = False
|
||||
self.glance = None
|
||||
self.keystone = None
|
||||
self.module = module
|
||||
try:
|
||||
self._keystone_authenticate()
|
||||
self._init_glance()
|
||||
except Exception as e:
|
||||
self.module.fail_json(
|
||||
err="Initialisation Error: %s" % e,
|
||||
rc=2, msg=str(e))
|
||||
|
||||
def _parse_openrc(self):
|
||||
"""Get credentials from an openrc file."""
|
||||
openrc_path = self.module.params['openrc_path']
|
||||
line_re = re.compile('^export (?P<key>OS_\w*)=(?P<value>[^\n]*)')
|
||||
with open(openrc_path) as openrc:
|
||||
matches = [line_re.match(l) for l in openrc]
|
||||
return dict(
|
||||
(g.groupdict()['key'], g.groupdict()['value'])
|
||||
for g in matches if g
|
||||
)
|
||||
|
||||
def _keystone_authenticate(self):
|
||||
"""Authenticate with Keystone."""
|
||||
openrc = self._parse_openrc()
|
||||
insecure = self.module.params['insecure']
|
||||
self.keystone = ksclient.Client(insecure=insecure,
|
||||
username=openrc['OS_USERNAME'],
|
||||
password=openrc['OS_PASSWORD'],
|
||||
project_name=openrc['OS_PROJECT_NAME'],
|
||||
auth_url=openrc['OS_AUTH_URL'])
|
||||
|
||||
def _init_glance(self):
|
||||
"""Create glance client object using token and url from keystone."""
|
||||
openrc = self._parse_openrc()
|
||||
p = self.module.params
|
||||
v = p['api_version']
|
||||
ep = self.keystone.service_catalog.url_for(
|
||||
service_type='image',
|
||||
endpoint_type=openrc['OS_ENDPOINT_TYPE']
|
||||
)
|
||||
|
||||
self.glance = glclient.Client(
|
||||
endpoint='%s/v%s' % (ep, v),
|
||||
token=self.keystone.get_token(self.keystone.session)
|
||||
)
|
||||
|
||||
def route(self):
|
||||
"""Run the command specified by the command parameter."""
|
||||
getattr(self, COMMAND_MAP[self.module.params['command']])()
|
||||
|
||||
def _get_image_facts(self):
|
||||
"""Helper function to format image list as a dictionary."""
|
||||
p = self.module.params
|
||||
v = p['api_version']
|
||||
if v == '1':
|
||||
return dict(
|
||||
(i.name, i.to_dict()) for i in self.glance.images.list()
|
||||
)
|
||||
elif v == '2':
|
||||
return dict(
|
||||
(i.name, i) for i in self.glance.images.list()
|
||||
)
|
||||
|
||||
def list_images(self):
|
||||
"""Get information about available glance images.
|
||||
|
||||
Returns a fact dictionary, ``glance_images``.
|
||||
"""
|
||||
self.module.exit_json(
|
||||
changed=self.state_change,
|
||||
ansible_facts=dict(glance_images=self._get_image_facts()))
|
||||
|
||||
def create_image(self):
|
||||
"""Create a glance image that references a remote url."""
|
||||
p = self.module.params
|
||||
v = p['api_version']
|
||||
image_name = p['image_name']
|
||||
image_opts = dict(
|
||||
name=image_name,
|
||||
disk_format=p['image_disk_format'],
|
||||
container_format=p['image_container_format'],
|
||||
copy_from=p['image_url']
|
||||
)
|
||||
if v == '1':
|
||||
image_opts['is_public'] = p['image_is_public']
|
||||
elif v == '2':
|
||||
if p['image_is_public']:
|
||||
vis = 'public'
|
||||
else:
|
||||
vis = 'private'
|
||||
image_opts['visibility'] = vis
|
||||
|
||||
images = {i.name for i in self.glance.images.list()}
|
||||
if image_name in images:
|
||||
self.module.exit_json(
|
||||
changed=self.state_change,
|
||||
ansible_facts=dict(
|
||||
glance_images=self._get_image_facts()
|
||||
)
|
||||
)
|
||||
else:
|
||||
self.glance.images.create(**image_opts)
|
||||
self.state_change = True
|
||||
self.module.exit_json(
|
||||
changed=self.state_change,
|
||||
ansible_facts=dict(
|
||||
glance_images=self._get_image_facts()
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
def main():
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
command=dict(required=True, choices=COMMAND_MAP.keys()),
|
||||
openrc_path=dict(required=True),
|
||||
image_name=dict(required=False),
|
||||
image_url=dict(required=False),
|
||||
image_container_format=dict(required=False),
|
||||
image_disk_format=dict(required=False),
|
||||
image_is_public=dict(required=False, choices=BOOLEANS),
|
||||
api_version=dict(default='1', required=False, choices=['1', '2']),
|
||||
insecure=dict(default=False, required=False,
|
||||
choices=BOOLEANS + ['True', 'False'])
|
||||
),
|
||||
supports_check_mode=False
|
||||
)
|
||||
mg = ManageGlance(module)
|
||||
mg.route()
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
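The regular expression used by `_parse_openrc` above can be tested on its own; a minimal sketch, assuming an openrc file made of `export OS_*=value` lines (the `parse_openrc` name is illustrative):

```python
import re

def parse_openrc(text):
    """Extract OS_* export lines from openrc content into a dict."""
    line_re = re.compile(r'^export (?P<key>OS_\w*)=(?P<value>[^\n]*)')
    matches = (line_re.match(line) for line in text.splitlines())
    # Keep only the lines that matched the export pattern.
    return {m.group('key'): m.group('value') for m in matches if m}
```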
1309
library/keystone
Normal file
File diff suppressed because it is too large
598
library/memcached
Normal file
@ -0,0 +1,598 @@
|
||||
#!/usr/bin/python
|
||||
# (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
|
||||
#
|
||||
# Copyright 2014, Rackspace US, Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import base64
|
||||
import errno
|
||||
import os
|
||||
import stat
|
||||
import sys
|
||||
|
||||
import memcache
|
||||
try:
|
||||
from Crypto.Cipher import AES
|
||||
from Crypto import Random
|
||||
|
||||
ENCRYPT_IMPORT = True
|
||||
except ImportError:
|
||||
ENCRYPT_IMPORT = False
|
||||
|
||||
# import module snippets
|
||||
from ansible.module_utils.basic import *
|
||||
|
||||
DOCUMENTATION = """
|
||||
---
|
||||
module: memcached
|
||||
version_added: "1.6.6"
|
||||
short_description:
|
||||
- Add, remove, and get items from memcached
|
||||
description:
|
||||
- Add, remove, and get items from memcached
|
||||
options:
|
||||
name:
|
||||
description:
|
||||
- Memcached key name
|
||||
required: true
|
||||
content:
|
||||
description:
|
||||
- Add content to memcached. Only used when state is 'present'.
|
||||
required: false
|
||||
file_path:
|
||||
description:
|
||||
- This can be used with state 'present' and 'retrieve'. When set
|
||||
with state 'present' the contents of a file will be used, when
|
||||
set with state 'retrieve' the contents of the memcached key will
|
||||
be written to a file.
|
||||
required: false
|
||||
state:
|
||||
description:
|
||||
- ['absent', 'present', 'retrieve']
|
||||
required: true
|
||||
server:
|
||||
description:
|
||||
- server IP address and port. This can be a comma separated list of
|
||||
servers to connect to.
|
||||
required: true
|
||||
encrypt_string:
|
||||
description:
|
||||
- Encrypt/Decrypt a memcached object using a provided value.
|
||||
required: false
|
||||
dir_mode:
|
||||
description:
|
||||
- If a directory is created when using the ``file_path`` argument,
|
||||
the directory will be created with a set mode.
|
||||
default: '0755'
|
||||
required: false
|
||||
file_mode:
|
||||
description:
|
||||
- If a file is created when using the ``file_path`` argument,
|
||||
the file will be created with a set mode.
|
||||
default: '0644'
|
||||
required: false
|
||||
expires:
|
||||
description:
|
||||
- Seconds until an item is expired from memcached.
|
||||
default: 300
|
||||
required: false
|
||||
notes:
|
||||
- The "absent" state will remove an item from memcached.
|
||||
- The "present" state will place an item from a string or a file into
|
||||
memcached.
|
||||
- The "retrieve" state will get an item from memcached and return it as a
|
||||
string. If a ``file_path`` is set this module will also write the value
|
||||
to a file.
|
||||
- All items added into memcached are base64 encoded.
|
||||
- All items retrieved will attempt base64 decode and return the string
|
||||
value if not applicable.
|
||||
- Items retrieved from memcached are returned within a "value" key unless
|
||||
a ``file_path`` is specified which would then write the contents of the
|
||||
memcached key to a file.
|
||||
- The ``file_path`` and ``content`` fields are mutually exclusive.
|
||||
- If you'd like to encrypt items in memcached, PyCrypto is required.
|
||||
requirements:
|
||||
- "python-memcached"
|
||||
optional_requirements:
|
||||
- "pycrypto"
|
||||
author: Kevin Carter
|
||||
"""
|
||||
|
||||
EXAMPLES = """
|
||||
# Add an item into memcached.
|
||||
- memcached:
|
||||
name: "key_name"
|
||||
content: "Super awesome value"
|
||||
state: "present"
|
||||
server: "localhost:11211"
|
||||
|
||||
# Read the contents of a memcached key, returned as "memcached_phrase.value".
|
||||
- memcached:
|
||||
name: "key_name"
|
||||
state: "retrieve"
|
||||
server: "localhost:11211"
|
||||
register: memcached_key
|
||||
|
||||
# Add the contents of a file into memcached.
|
||||
- memcached:
|
||||
name: "key_name"
|
||||
file_path: "/home/user_name/file.txt"
|
||||
state: "present"
|
||||
server: "localhost:11211"
|
||||
|
||||
# Write the contents of a memcached key to a file and is returned as
|
||||
# "memcached_phrase.value".
|
||||
- memcached:
|
||||
name: "key_name"
|
||||
file_path: "/home/user_name/file.txt"
|
||||
state: "retrieve"
|
||||
server: "localhost:11211"
|
||||
register: memcached_key
|
||||
|
||||
# Delete an item from memcached.
|
||||
- memcached:
|
||||
name: "key_name"
|
||||
state: "absent"
|
||||
server: "localhost:11211"
|
||||
"""
|
||||
|
||||
SERVER_MAX_VALUE_LENGTH = 1024 * 256
|
||||
|
||||
MAX_MEMCACHED_CHUNKS = 256
|
||||
|
||||
|
||||
class AESCipher(object):
|
||||
"""Encrypt a string using AES.
|
||||
|
||||
Solution derived from "http://stackoverflow.com/a/21928790"
|
||||
"""
|
||||
def __init__(self, key):
|
||||
if ENCRYPT_IMPORT is False:
|
||||
raise ImportError(
|
||||
'PyCrypto failed to be imported. Encryption is not supported'
|
||||
' on this system until PyCrypto is installed.'
|
||||
)
|
||||
|
||||
self.bs = 32
|
||||
if len(key) >= 32:
|
||||
self.key = key[:32]
|
||||
else:
|
||||
self.key = self._pad(key)
|
||||
|
||||
def encrypt(self, raw):
|
||||
"""Encrypt raw message.
|
||||
|
||||
:param raw: ``str``
|
||||
:returns: ``str`` Base64 encoded string.
|
||||
"""
|
||||
raw = self._pad(raw)
|
||||
iv = Random.new().read(AES.block_size)
|
||||
cipher = AES.new(self.key, AES.MODE_CBC, iv)
|
||||
return base64.b64encode(iv + cipher.encrypt(raw))
|
||||
|
||||
def decrypt(self, enc):
|
||||
"""Decrypt an encrypted message.
|
||||
|
||||
:param enc: ``str``
|
||||
:returns: ``str``
|
||||
"""
|
||||
enc = base64.b64decode(enc)
|
||||
iv = enc[:AES.block_size]
|
||||
cipher = AES.new(self.key, AES.MODE_CBC, iv)
|
||||
return self._unpad(cipher.decrypt(enc[AES.block_size:]))
|
||||
|
||||
def _pad(self, string):
|
||||
"""Pad a string out to the AES block size.
|
||||
|
||||
:param string: ``str``
|
||||
"""
|
||||
base = (self.bs - len(string) % self.bs)
|
||||
back = chr(self.bs - len(string) % self.bs)
|
||||
return string + base * back
|
||||
|
||||
@staticmethod
|
||||
def _unpad(string):
|
||||
"""Un-pad a decrypted string.
|
||||
|
||||
:param string: ``str``
|
||||
"""
|
||||
ordinal_range = ord(string[len(string) - 1:])
|
||||
return string[:-ordinal_range]
|
||||
|
||||
|
||||
class Memcached(object):
|
||||
"""Manage objects within memcached."""
|
||||
def __init__(self, module):
|
||||
self.module = module
|
||||
self.state_change = False
|
||||
self.mc = None
|
||||
|
||||
def router(self):
|
||||
"""Route all commands to their respective functions.
|
||||
|
||||
If an exception occurs, a failure will be raised.
|
||||
"""
|
||||
|
||||
try:
|
||||
action = getattr(self, self.module.params['state'])
|
||||
self.mc = memcache.Client(
|
||||
self.module.params['server'].split(','),
|
||||
server_max_value_length=SERVER_MAX_VALUE_LENGTH,
|
||||
debug=0
|
||||
)
|
||||
facts = action()
|
||||
except Exception as exp:
|
||||
self._failure(error=str(exp), rc=1, msg='general exception')
|
||||
else:
|
||||
self.mc.disconnect_all()
|
||||
self.module.exit_json(
|
||||
changed=self.state_change, **facts
|
||||
)
|
||||
|
||||
def _failure(self, error, rc, msg):
|
||||
"""Report a failure when running an Ansible command.
|
||||
|
||||
:param error: ``str`` Error that occurred.
|
||||
:param rc: ``int`` Return code while executing an Ansible command.
|
||||
:param msg: ``str`` Message to report.
|
||||
"""
|
||||
|
||||
self.module.fail_json(msg=msg, rc=rc, err=error)
|
||||
|
||||
def absent(self):
|
||||
"""Remove a key from memcached.
|
||||
|
||||
If the value is not deleted when instructed to do so an exception will
|
||||
be raised.
|
||||
|
||||
:return: ``dict``
|
||||
"""
|
||||
|
||||
key_name = self.module.params['name']
|
||||
get_keys = [
|
||||
'%s.%s' % (key_name, i) for i in xrange(MAX_MEMCACHED_CHUNKS)
|
||||
]
|
||||
self.mc.delete_multi(get_keys)
|
||||
value = self.mc.get_multi(get_keys)
|
||||
if not value:
|
||||
self.state_change = True
|
||||
return {'absent': True, 'key': self.module.params['name']}
|
||||
else:
|
||||
self._failure(
|
||||
error='Memcache key not deleted',
|
||||
rc=1,
|
||||
msg='Failed to remove an item from memcached. Please check your'
|
||||
' memcached server for issues. If you are load balancing'
|
||||
' memcached, attempt to connect to a single node.'
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def _decode_value(value):
|
||||
"""Return a ``str`` from a base64 decoded value.
|
||||
|
||||
If the content is not a base64 ``str`` the raw value will be returned.
|
||||
|
||||
:param value: ``str``
|
||||
:return: ``str``
|
||||
"""
|
||||
|
||||
try:
|
||||
b64_value = base64.decodestring(value)
|
||||
except Exception:
|
||||
return value
|
||||
else:
|
||||
return b64_value
|
||||
|
||||
def _encode_value(self, value):
|
||||
"""Return a base64 encoded value.
|
||||
|
||||
If the value can't be base64 encoded an exception will be raised.
|
||||
|
||||
:param value: ``str``
|
||||
:return: ``str``
|
||||
"""
|
||||
|
||||
try:
|
||||
b64_value = base64.encodestring(value)
|
||||
except Exception as exp:
|
||||
self._failure(
|
||||
error=str(exp),
|
||||
rc=1,
|
||||
msg='The value provided can not be Base64 encoded.'
|
||||
)
|
||||
else:
|
||||
return b64_value
|
||||
|
||||
def _file_read(self, full_path, pass_on_error=False):
|
||||
"""Read the contents of a file.
|
||||
|
||||
This will read the contents of a file. If the ``full_path`` does not
|
||||
exist an exception will be raised.
|
||||
|
||||
:param full_path: ``str``
|
||||
:return: ``str``
|
||||
"""
|
||||
|
||||
try:
|
||||
with open(full_path, 'rb') as f:
|
||||
o_value = f.read()
|
||||
except IOError as exp:
|
||||
if pass_on_error is False:
|
||||
self._failure(
|
||||
error=str(exp),
|
||||
rc=1,
|
||||
msg="The file you've specified does not exist. Please"
|
||||
" check your full path @ [ %s ]." % full_path
|
||||
)
|
||||
else:
|
||||
return None
|
||||
else:
|
||||
return o_value
|
||||
|
||||
def _chown(self, path, mode_type):
|
||||
"""Chown a file or directory based on a given mode type.
|
||||
|
||||
If the file is modified the state will be changed.
|
||||
|
||||
:param path: ``str``
|
||||
:param mode_type: ``str``
|
||||
"""
|
||||
mode = self.module.params.get(mode_type)
|
||||
# Ensure that the mode type is a string.
|
||||
mode = str(mode)
|
||||
_mode = oct(stat.S_IMODE(os.stat(path).st_mode))
|
||||
if _mode != mode and _mode[1:] != mode:
|
||||
os.chmod(path, int(mode, 8))
|
||||
self.state_change = True
|
||||
|
||||
def _file_write(self, full_path, value):
|
||||
"""Write the contents of ``value`` to the ``full_path``.
|
||||
|
||||
This will return True upon success and will raise an exception upon
|
||||
failure.
|
||||
|
||||
:param full_path: ``str``
|
||||
:param value: ``str``
|
||||
:return: ``bool``
|
||||
"""
|
||||
|
||||
try:
|
||||
# Ensure that the directory exists
|
||||
dir_path = os.path.dirname(full_path)
|
||||
try:
|
||||
os.makedirs(dir_path)
|
||||
except OSError as exp:
|
||||
if exp.errno == errno.EEXIST and os.path.isdir(dir_path):
|
||||
pass
|
||||
else:
|
||||
self._failure(
|
||||
error=str(exp),
|
||||
rc=1,
|
||||
msg="The directory [ %s ] does not exist and couldn't"
|
||||
" be created. Please check the path and that you"
|
||||
" have permission to write the file."
|
||||
)
|
||||
|
||||
# Ensure proper directory permissions
|
||||
self._chown(path=dir_path, mode_type='dir_mode')
|
||||
|
||||
# Write contents of a cached key to a file.
|
||||
with open(full_path, 'wb') as f:
|
||||
if isinstance(value, list):
|
||||
f.writelines(value)
|
||||
else:
|
||||
f.write(value)
|
||||
|
||||
# Ensure proper file permissions
|
||||
self._chown(path=full_path, mode_type='file_mode')
|
||||
|
||||
except IOError as exp:
|
||||
self._failure(
|
||||
error=str(exp),
|
||||
rc=1,
|
||||
msg="There was an issue while attempting to write to the"
|
||||
" file [ %s ]. Please check your full path and"
|
||||
" permissions." % full_path
|
||||
)
|
||||
else:
|
||||
return True
|
||||
|
||||
def retrieve(self):
|
||||
"""Return a value from memcached.
|
||||
|
||||
If ``file_path`` is specified the value of the memcached key will be
|
||||
written to a file at the ``file_path`` location. If the value of a key
|
||||
is None, an exception will be raised.
|
||||
|
||||
:returns: ``dict``
|
||||
"""
|
||||
|
||||
key_name = self.module.params['name']
|
||||
get_keys = [
|
||||
'%s.%s' % (key_name, i) for i in xrange(MAX_MEMCACHED_CHUNKS)
|
||||
]
|
||||
multi_value = self.mc.get_multi(get_keys)
|
||||
if multi_value:
|
||||
value = ''.join([i for i in multi_value.values() if i is not None])
|
||||
# Get the file path if specified.
|
||||
file_path = self.module.params.get('file_path')
|
||||
if file_path is not None:
|
||||
full_path = os.path.abspath(os.path.expanduser(file_path))
|
||||
|
||||
# Decode cached value
|
||||
encrypt_string = self.module.params.get('encrypt_string')
|
||||
if encrypt_string:
|
||||
_d_value = AESCipher(key=encrypt_string)
|
||||
d_value = _d_value.decrypt(enc=value)
|
||||
if not d_value:
|
||||
d_value = self._decode_value(value=value)
|
||||
else:
|
||||
d_value = self._decode_value(value=value)
|
||||
|
||||
o_value = self._file_read(
|
||||
full_path=full_path, pass_on_error=True
|
||||
)
|
||||
|
||||
# compare old value to new value and write if different
|
||||
if o_value != d_value:
|
||||
self.state_change = True
|
||||
self._file_write(full_path=full_path, value=d_value)
|
||||
|
||||
return {
|
||||
'present': True,
|
||||
'key': self.module.params['name'],
|
||||
'value': value,
|
||||
'file_path': full_path
|
||||
}
|
||||
else:
|
||||
return {
|
||||
'present': True,
|
||||
'key': self.module.params['name'],
|
||||
'value': value
|
||||
}
|
||||
else:
|
||||
self._failure(
|
||||
error='Memcache key not found',
|
||||
rc=1,
|
||||
msg='The key you specified was not found within memcached. '
|
||||
'If you are load balancing memcached, attempt to connect'
|
||||
' to a single node.'
|
||||
)
|
||||
|
||||
def present(self):
|
||||
"""Create and or update a key within Memcached.
|
||||
|
||||
The state processed here is present. This state will ensure that
|
||||
content is written to a memcached server. When ``file_path`` is
|
||||
specified the content will be read in from a file.
|
||||
"""
|
||||
|
||||
file_path = self.module.params.get('file_path')
|
||||
if file_path is not None:
|
||||
full_path = os.path.abspath(os.path.expanduser(file_path))
|
||||
# Read the contents of a file into memcached.
|
||||
o_value = self._file_read(full_path=full_path)
|
||||
else:
|
||||
o_value = self.module.params['content']
|
||||
|
||||
# Encode cached value
|
||||
encrypt_string = self.module.params.get('encrypt_string')
|
||||
if encrypt_string:
|
||||
_d_value = AESCipher(key=encrypt_string)
|
||||
d_value = _d_value.encrypt(raw=o_value)
|
||||
else:
|
||||
d_value = self._encode_value(value=o_value)
|
||||
|
||||
compare = 1024 * 128
|
||||
chunks = sys.getsizeof(d_value) / compare
|
||||
if chunks == 0:
|
||||
chunks = 1
|
||||
elif chunks > MAX_MEMCACHED_CHUNKS:
|
||||
self._failure(
|
||||
error='Memcache content too large',
|
||||
rc=1,
|
||||
msg='The content that you are attempting to cache is larger'
|
||||
' than [ %s ] megabytes.'
|
||||
% ((compare * MAX_MEMCACHED_CHUNKS / 1024 / 1024))
|
||||
)
|
||||
|
||||
step = len(d_value) / chunks
|
||||
if step == 0:
|
||||
step = 1
|
||||
|
||||
key_name = self.module.params['name']
|
||||
split_d_value = {}
|
||||
count = 0
|
||||
for i in range(0, len(d_value), step):
|
||||
split_d_value['%s.%s' % (key_name, count)] = d_value[i:i + step]
|
||||
count += 1
|
||||
|
||||
value = self.mc.set_multi(
|
||||
mapping=split_d_value,
|
||||
time=self.module.params['expires'],
|
||||
min_compress_len=2048
|
||||
)
|
||||
|
||||
if not value:
|
||||
self.state_change = True
|
||||
return {
|
||||
'present': True,
|
||||
'key': self.module.params['name']
|
||||
}
|
||||
else:
|
||||
self._failure(
|
||||
error='Memcache content not created',
|
||||
rc=1,
|
||||
msg='The content you attempted to place within memcached'
|
||||
' was not created. If you are load balancing'
|
||||
' memcached, attempt to connect to a single node.'
|
||||
' Returned a value of unstored keys [ %s ] - Original'
|
||||
' Connection [ %s ]'
|
||||
% (value, [i.__dict__ for i in self.mc.servers])
|
||||
)
|
||||
|
||||
|
||||
def main():
|
||||
"""Main ansible run method."""
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
name=dict(
|
||||
type='str',
|
||||
required=True
|
||||
),
|
||||
content=dict(
|
||||
type='str',
|
||||
required=False
|
||||
),
|
||||
file_path=dict(
|
||||
type='str',
|
||||
required=False
|
||||
),
|
||||
state=dict(
|
||||
type='str',
|
||||
required=True
|
||||
),
|
||||
server=dict(
|
||||
type='str',
|
||||
required=True
|
||||
),
|
||||
expires=dict(
|
||||
type='int',
|
||||
default=300,
|
||||
required=False
|
||||
),
|
||||
file_mode=dict(
|
||||
type='str',
|
||||
default='0644',
|
||||
required=False
|
||||
),
|
||||
dir_mode=dict(
|
||||
type='str',
|
||||
default='0755',
|
||||
required=False
|
||||
),
|
||||
encrypt_string=dict(
|
||||
type='str',
|
||||
required=False
|
||||
)
|
||||
),
|
||||
supports_check_mode=False,
|
||||
mutually_exclusive=[
|
||||
['content', 'file_path']
|
||||
]
|
||||
)
|
||||
ms = Memcached(module=module)
|
||||
ms.router()
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
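The `_pad`/`_unpad` pair in `AESCipher` above is PKCS#7-style padding with a fixed 32-byte block; a standalone round-trip sketch of the same scheme (function names are illustrative):

```python
BS = 32  # block size used by AESCipher above

def pad(s):
    # Append n copies of chr(n), where n is the distance to the next
    # 32-byte boundary (a full block of padding when already aligned).
    n = BS - len(s) % BS
    return s + n * chr(n)

def unpad(s):
    # The last character encodes how many padding characters to strip.
    return s[:-ord(s[-1:])]
```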
79
library/name2int
Normal file
@ -0,0 +1,79 @@
|
||||
#!/usr/bin/python
|
||||
# (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
|
||||
#
|
||||
# Copyright 2014, Rackspace US, Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
import hashlib
|
||||
import platform
|
||||
|
||||
# import module snippets
|
||||
from ansible.module_utils.basic import *
|
||||
|
||||
DOCUMENTATION = """
|
||||
---
|
||||
module: name2int
|
||||
version_added: "1.6.6"
|
||||
short_description:
|
||||
- hash a host name and return an integer
|
||||
description:
|
||||
- hash a host name and return an integer
|
||||
options:
|
||||
name:
|
||||
description:
|
||||
- Name of the host to hash
|
||||
required: true
|
||||
author: Kevin Carter
|
||||
"""
|
||||
|
||||
EXAMPLES = """
|
||||
# Hash a host name into an integer
|
||||
- name2int:
|
||||
name: "Some-hostname.com"
|
||||
"""
|
||||
|
||||
|
||||
class HashHostname(object):
|
||||
def __init__(self, module):
|
||||
"""Generate an integer from a name."""
|
||||
self.module = module
|
||||
|
||||
def return_hashed_host(self, name):
|
||||
hashed_name = hashlib.md5(name).hexdigest()
|
||||
hash_int = int(hashed_name, 32)
|
||||
real_int = int(hash_int % 300)
|
||||
return real_int
|
||||
|
||||
|
||||
def main():
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
name=dict(
|
||||
required=True
|
||||
)
|
||||
),
|
||||
supports_check_mode=False
|
||||
)
|
||||
try:
|
||||
sm = HashHostname(module=module)
|
||||
int_value = sm.return_hashed_host(platform.node())
|
||||
resp = {'int_value': int_value}
|
||||
module.exit_json(changed=True, **resp)
|
||||
except Exception as exp:
|
||||
resp = {'stderr': exp}
|
||||
module.fail_json(msg='Failed Process', **resp)
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
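The hashing done by `return_hashed_host` above can be reproduced standalone. Note that the module parses the md5 hex digest as a base-32 integer (hex digits are a subset of base-32 digits) before reducing it modulo 300; this Python 3 flavored sketch (the `name_to_int` name is illustrative) encodes the name first:

```python
import hashlib

def name_to_int(name, modulo=300):
    """Hash a name into a small, stable integer in [0, modulo)."""
    hashed = hashlib.md5(name.encode('utf-8')).hexdigest()
    # Parsed as base 32, matching the module, then reduced modulo `modulo`.
    return int(hashed, 32) % modulo
```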
422
library/neutron
Normal file
@ -0,0 +1,422 @@
|
||||
#!/usr/bin/env python
|
||||
# Copyright 2014, Rackspace US, Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
import keystoneclient.v3.client as ksclient
|
||||
from neutronclient.neutron import client as nclient
|
||||
|
||||
# import module snippets
|
||||
from ansible.module_utils.basic import *
|
||||
|
||||
|
||||
DOCUMENTATION = """
|
||||
---
|
||||
module: neutron
|
||||
short_description:
|
||||
- Basic module for interacting with openstack neutron
|
||||
description:
|
||||
- Basic module for interacting with openstack neutron
|
||||
options:
|
||||
command:
|
||||
description:
|
||||
- Operation for the module to perform. Currently available
|
||||
choices:
|
||||
- create_network
|
||||
- create_subnet
|
||||
- create_router
|
||||
- add_router_interface
|
||||
required: True
|
||||
openrc_path:
|
||||
description:
|
||||
- Path to openrc file from which credentials and keystone endpoint
|
||||
will be extracted
|
||||
net_name:
|
||||
description:
|
||||
- Name of network
|
||||
subnet_name:
|
||||
description:
|
||||
- Name of subnet
|
||||
router_name:
|
||||
description:
|
||||
- Name of router
|
||||
cidr:
|
||||
description:
|
||||
- Specify CIDR to use when creating subnet
|
||||
provider_physical_network:
|
||||
description:
|
||||
- Specify provider:physical_network when creating network
|
||||
provider_network_type:
|
||||
description:
|
||||
- Specify provider:network_type when creating network
|
||||
provider_segmentation_id:
|
||||
description:
|
||||
- Specify provider:segmentation_id when creating network
|
||||
router_external:
|
||||
description:
|
||||
- Specify router:external when creating network
|
||||
external_gateway_info:
|
||||
description:
|
||||
- Specify external_gateway_info when creating router
|
||||
insecure:
|
||||
description:
|
||||
- Explicitly allow client to perform "insecure" TLS
|
||||
choices:
|
||||
- false
|
||||
- true
|
||||
default: false
|
||||
author: Hugh Saunders
|
||||
"""
|
||||
|
||||
EXAMPLES = """
|
||||
- name: Create private network
|
||||
neutron:
|
||||
command: create_network
|
||||
openrc_path: /root/openrc
|
||||
net_name: private
|
||||
- name: Create public network
|
||||
neutron:
|
||||
command: create_network
|
||||
openrc_path: /root/openrc
|
||||
net_name: public
|
||||
provider_network_type: flat
|
||||
provider_physical_network: vlan
|
||||
router_external: true
|
||||
- name: Create private subnet
|
||||
neutron:
|
||||
command: create_subnet
|
||||
openrc_path: /root/openrc
|
||||
net_name: private
|
||||
subnet_name: private-subnet
|
||||
cidr: "192.168.74.0/24"
|
||||
- name: Create public subnet
|
||||
neutron:
|
||||
command: create_subnet
|
||||
openrc_path: /root/openrc
|
||||
net_name: public
|
||||
subnet_name: public-subnet
|
||||
cidr: "10.1.13.0/24"
|
||||
- name: Create router
|
||||
neutron:
|
||||
command: create_router
|
||||
openrc_path: /root/openrc
|
||||
router_name: router
|
||||
external_gateway_info: public
|
||||
- name: Add private subnet to router
|
||||
neutron:
|
||||
command: add_router_interface
|
||||
openrc_path: /root/openrc
|
||||
router_name: router
|
||||
subnet_name: private-subnet
|
||||
"""
|
||||
|
||||
|
||||
COMMAND_MAP = {
    'create_network': {
        'variables': [
            'net_name',
            'provider_physical_network',
            'provider_network_type',
            'provider_segmentation_id',
            'router_external',
            'tenant_id'
        ]
    },
    'create_subnet': {
        'variables': [
            'net_name',
            'subnet_name',
            'cidr',
            'tenant_id'
        ]
    },
    'create_router': {
        'variables': [
            'router_name',
            'external_gateway_info',
            'tenant_id'
        ]
    },
    'add_router_interface': {
        'variables': [
            'router_name',
            'subnet_name'
        ]
    }
}

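The mapping above pairs each command name with the module parameters its handler may read. A minimal, hypothetical sketch (names like `ExampleDispatcher` and `EXAMPLE_COMMAND_MAP` are illustrative, not part of the module) of the getattr-based dispatch that `command_router()` performs with this map:

```python
# Hypothetical sketch of the dispatch pattern: the command name selects a
# '_<command>' method via getattr, and the map entry supplies the parameter
# names that method is allowed to consume.
EXAMPLE_COMMAND_MAP = {'create_network': {'variables': ['net_name']}}


class ExampleDispatcher(object):
    def _create_network(self, variables):
        # A real handler would pull these params from the Ansible module.
        return 'would read params: %s' % ','.join(variables)

    def run(self, command_name):
        if command_name not in EXAMPLE_COMMAND_MAP:
            raise LookupError('Command [ %s ] was not found.' % command_name)
        action = getattr(self, '_%s' % command_name)
        return action(variables=EXAMPLE_COMMAND_MAP[command_name]['variables'])


print(ExampleDispatcher().run('create_network'))
```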
class ManageNeutron(object):
    def __init__(self, module):
        self.state_change = False
        self.neutron = None
        self.keystone = None
        self.module = module

    def command_router(self):
        """Run the command as its provided to the module."""
        command_name = self.module.params['command']
        if command_name not in COMMAND_MAP:
            self.failure(
                error='No Command Found',
                rc=2,
                msg='Command [ %s ] was not found.' % command_name
            )

        action_command = COMMAND_MAP[command_name]
        if hasattr(self, '_%s' % command_name):
            action = getattr(self, '_%s' % command_name)
            try:
                self._keystone_authenticate()
                self._init_neutron()
            except Exception as e:
                self.module.fail_json(
                    err="Initialisation Error: %s" % e,
                    rc=2, msg=str(e))
            facts = action(variables=action_command['variables'])
            if facts is None:
                self.module.exit_json(changed=self.state_change)
            else:
                self.module.exit_json(
                    changed=self.state_change,
                    ansible_facts=facts
                )
        else:
            self.failure(
                error='Command not in ManageNeutron class',
                rc=2,
                msg='Method [ %s ] was not found.' % command_name
            )

    @staticmethod
    def _facts(resource_type, resource_data):
        """Return a dict for our Ansible facts."""
        key = 'neutron_%s' % resource_type
        facts = {key: {}}
        for f in resource_data[resource_type]:
            res_name = f['name']
            del f['name']
            facts[key][res_name] = f

        return facts

    def _get_vars(self, variables, required=None):
        """Return a dict of all variables as found within the module.

        :param variables: ``list`` List of all variables that are available to
                          use within the Neutron Command.
        :param required: ``list`` Name of variables that are required.
        """
        return_dict = {}
        for variable in variables:
            return_dict[variable] = self.module.params.get(variable)
        else:
            if isinstance(required, list):
                for var_name in required:
                    check = return_dict.get(var_name)
                    if check is None:
                        self.failure(
                            error='Missing [ %s ] from Task or found a None'
                                  ' value' % var_name,
                            rc=000,
                            msg='variables %s - available params [ %s ]'
                                % (variables, self.module.params)
                        )
        return return_dict

    def failure(self, error, rc, msg):
        """Return a Failure when running an Ansible command.

        :param error: ``str`` Error that occurred.
        :param rc: ``int`` Return code while executing an Ansible command.
        :param msg: ``str`` Message to report.
        """
        self.module.fail_json(msg=msg, rc=rc, err=error)

    def _parse_openrc(self):
        """Get credentials from an openrc file."""
        openrc_path = self.module.params['openrc_path']
        line_re = re.compile('^export (?P<key>OS_\w*)=(?P<value>[^\n]*)')
        with open(openrc_path) as openrc:
            matches = [line_re.match(l) for l in openrc]
            return dict(
                (g.groupdict()['key'], g.groupdict()['value'])
                for g in matches if g
            )

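The `_parse_openrc` regex keeps only `export OS_*=` lines and drops everything else. A standalone sketch of that extraction, using a hypothetical openrc body (the credentials below are placeholders, not real values):

```python
import re

# Hypothetical openrc contents, for illustration only.
OPENRC = """export OS_USERNAME=admin
export OS_PASSWORD=secrete
export OS_AUTH_URL=http://127.0.0.1:5000/v3
# comment lines and non-export lines are ignored
OS_IGNORED=value
"""

# Same pattern as _parse_openrc: only 'export OS_*' assignments match.
line_re = re.compile(r'^export (?P<key>OS_\w*)=(?P<value>[^\n]*)')
matches = [line_re.match(l) for l in OPENRC.splitlines()]
creds = dict(
    (g.groupdict()['key'], g.groupdict()['value'])
    for g in matches if g
)
print(creds)
```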
    def _keystone_authenticate(self):
        """Authenticate with Keystone."""
        openrc = self._parse_openrc()
        insecure = self.module.params['insecure']
        self.keystone = ksclient.Client(insecure=insecure,
                                        username=openrc['OS_USERNAME'],
                                        password=openrc['OS_PASSWORD'],
                                        project_name=openrc['OS_PROJECT_NAME'],
                                        auth_url=openrc['OS_AUTH_URL'])

    def _init_neutron(self):
        """Create neutron client object using token and url from keystone."""
        openrc = self._parse_openrc()
        self.neutron = nclient.Client(
            '2.0',
            endpoint_url=self.keystone.service_catalog.url_for(
                service_type='network',
                endpoint_type=openrc['OS_ENDPOINT_TYPE']),
            token=self.keystone.get_token(self.keystone.session))

    def _get_resource_by_name(self, resource_type, resource_name):
        action = getattr(self.neutron, 'list_%s' % resource_type)
        resource = action(name=resource_name)[resource_type]

        if resource:
            return resource[0]['id']
        else:
            return None

    def _create_network(self, variables):
        required_vars = ['net_name']
        variables_dict = self._get_vars(variables, required=required_vars)
        net_name = variables_dict.pop('net_name')
        provider_physical_network = variables_dict.pop(
            'provider_physical_network'
        )
        provider_network_type = variables_dict.pop('provider_network_type')
        provider_segmentation_id = variables_dict.pop(
            'provider_segmentation_id'
        )
        router_external = variables_dict.pop('router_external')
        tenant_id = variables_dict.pop('tenant_id')

        if not self._get_resource_by_name('networks', net_name):
            n = {"name": net_name, "admin_state_up": True}
            if provider_physical_network:
                n['provider:physical_network'] = provider_physical_network
            if provider_network_type:
                n['provider:network_type'] = provider_network_type
            if provider_segmentation_id:
                n['provider:segmentation_id'] = str(provider_segmentation_id)
            if router_external:
                n['router:external'] = router_external
            if tenant_id:
                n['tenant_id'] = tenant_id

            self.state_change = True
            self.neutron.create_network({"network": n})

        return self._facts('networks', self.neutron.list_networks())

    def _create_subnet(self, variables):
        required_vars = ['net_name', 'subnet_name', 'cidr']
        variables_dict = self._get_vars(variables, required=required_vars)
        net_name = variables_dict.pop('net_name')
        subnet_name = variables_dict.pop('subnet_name')
        cidr = variables_dict.pop('cidr')
        network_id = self._get_resource_by_name('networks', net_name)
        tenant_id = variables_dict.pop('tenant_id')

        if not network_id:
            self.failure(
                error='Network not found',
                rc=1,
                msg='The specified network could not be found'
            )
        if not self.neutron.list_subnets(cidr=cidr,
                                         network_id=network_id)['subnets']:
            self.state_change = True
            s = {"name": subnet_name, "cidr": cidr, "ip_version": 4,
                 "network_id": network_id}
            if tenant_id:
                s["tenant_id"] = tenant_id
            self.neutron.create_subnet({"subnet": s})
        return self._facts('subnets', self.neutron.list_subnets())

    def _create_router(self, variables):
        required_vars = ['router_name', 'external_gateway_info']
        variables_dict = self._get_vars(variables, required=required_vars)
        router_name = variables_dict.pop('router_name')
        external_gateway_info = variables_dict.pop('external_gateway_info')
        tenant_id = variables_dict.pop('tenant_id')

        if not self._get_resource_by_name('routers', router_name):
            self.state_change = True
            r = {'name': router_name}
            if external_gateway_info:
                network_id = self._get_resource_by_name('networks',
                                                        external_gateway_info)
                r['external_gateway_info'] = {'network_id': network_id}
            if tenant_id:
                r['tenant_id'] = tenant_id
            self.neutron.create_router({'router': r})

        return self._facts('routers', self.neutron.list_routers())

    def _add_router_interface(self, variables):
        required_vars = ['router_name', 'subnet_name']
        variables_dict = self._get_vars(variables, required=required_vars)
        router_name = variables_dict.pop('router_name')
        subnet_name = variables_dict.pop('subnet_name')
        router_id = self._get_resource_by_name('routers', router_name)
        subnet_id = self._get_resource_by_name('subnets', subnet_name)

        if not router_id:
            self.failure(
                error='Router not found',
                rc=1,
                msg='The specified router could not be found'
            )

        if not subnet_id:
            self.failure(
                error='Subnet not found',
                rc=1,
                msg='The specified subnet could not be found'
            )

        found = False

        for port in self.neutron.list_ports(device_id=router_id)['ports']:
            for fixed_ips in port['fixed_ips']:
                if fixed_ips['subnet_id'] == subnet_id:
                    found = True
        if not found:
            self.state_change = True
            self.neutron.add_interface_router(router_id,
                                              {'subnet_id': subnet_id})


def main():
    module = AnsibleModule(
        argument_spec=dict(
            command=dict(required=True, choices=COMMAND_MAP.keys()),
            openrc_path=dict(required=True),
            net_name=dict(required=False),
            subnet_name=dict(required=False),
            cidr=dict(required=False),
            provider_physical_network=dict(required=False),
            provider_network_type=dict(required=False),
            provider_segmentation_id=dict(required=False),
            router_external=dict(required=False),
            router_name=dict(required=False),
            external_gateway_info=dict(required=False),
            tenant_id=dict(required=False),
            insecure=dict(default=False, required=False,
                          choices=BOOLEANS + ['True', 'False'])
        ),
        supports_check_mode=False
    )
    mn = ManageNeutron(module)
    mn.command_router()


if __name__ == '__main__':
    main()
283
library/provider_networks
Normal file
@ -0,0 +1,283 @@
#!/usr/bin/python
# (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
#
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


# import module snippets
from ansible.module_utils.basic import *


DOCUMENTATION = """
---
module: provider_networks
version_added: "1.8.4"
short_description:
    - Parse a list of networks and return data that Ansible can use
description:
    - Parse a list of networks and return data that Ansible can use
options:
    provider_networks:
        description:
            - List of networks to parse
        required: true
    is_metal:
        description:
            - Enable handling of on metal hosts
        required: false
    bind_prefix:
        description:
            - Add a prefix to all network interfaces.
        required: false
author: Kevin Carter
"""

EXAMPLES = """
## This is what the provider_networks list should look like.
# provider_networks:
#   - network:
#       container_bridge: "br-mgmt"
#       container_type: "veth"
#       container_interface: "eth1"
#       ip_from_q: "container"
#       type: "raw"
#       group_binds:
#         - all_containers
#         - hosts
#       is_container_address: true
#       is_ssh_address: true
#   - network:
#       container_bridge: "br-vxlan"
#       container_type: "veth"
#       container_interface: "eth10"
#       ip_from_q: "tunnel"
#       type: "vxlan"
#       range: "1:1000"
#       net_name: "vxlan"
#       group_binds:
#         - neutron_linuxbridge_agent
#   - network:
#       container_bridge: "br-vlan"
#       container_type: "veth"
#       container_interface: "eth12"
#       host_bind_override: "eth12"
#       type: "flat"
#       net_name: "flat"
#       group_binds:
#         - neutron_linuxbridge_agent
#   - network:
#       container_bridge: "br-vlan"
#       container_type: "veth"
#       container_interface: "eth11"
#       host_bind_override: "eth11"
#       type: "vlan"
#       range: "1:1, 101:101"
#       net_name: "vlan"
#       group_binds:
#         - neutron_linuxbridge_agent
#   - network:
#       container_bridge: "br-storage"
#       container_type: "veth"
#       container_interface: "eth2"
#       ip_from_q: "storage"
#       type: "raw"
#       group_binds:
#         - glance_api
#         - cinder_api
#         - cinder_volume
#         - nova_compute
#         - swift_proxy


- name: Test provider networks
  provider_networks:
    provider_networks: "{{ provider_networks }}"
  register: pndata1

- name: Test provider networks is metal
  provider_networks:
    provider_networks: "{{ provider_networks }}"
    is_metal: true
  register: pndata2

- name: Test provider networks with prefix
  provider_networks:
    provider_networks: "{{ provider_networks }}"
    bind_prefix: "brx"
    is_metal: true
  register: pndata3

## Module output:
# {
#     "network_flat_networks": "flat",
#     "network_flat_networks_list": [
#         "flat"
#     ],
#     "network_mappings": "flat:brx-eth12,vlan:brx-eth11",
#     "network_mappings_list": [
#         "flat:brx-eth12",
#         "vlan:brx-eth11"
#     ],
#     "network_types": "vxlan,flat,vlan",
#     "network_types_list": [
#         "vxlan",
#         "flat",
#         "vlan"
#     ],
#     "network_vlan_ranges": "vlan:1:1,vlan:1024:1025",
#     "network_vlan_ranges_list": [
#         "vlan:1:1",
#         "vlan:1024:1025"
#     ],
#     "network_vxlan_ranges": "1:1000",
#     "network_vxlan_ranges_list": [
#         "1:1000"
#     ]
# }
"""


class ProviderNetworksParsing(object):
    def __init__(self, module):
        """Parse a list of provider networks into mapping data.

        :param module: Loaded ansible module
        :type module: ``object``
        """
        self.module = module
        self.network_vlan_ranges = list()
        self.network_vxlan_ranges = list()
        self.network_flat_networks = list()
        self.network_mappings = list()
        self.network_types = list()

    def load_networks(self, provider_networks, is_metal=False,
                      bind_prefix=None):
        """Load the lists of network and network data types.

        :param provider_networks: list of networks defined in user_config
        :type provider_networks: ``list``
        :param is_metal: Enable or disable handling of on metal nodes
        :type is_metal: ``bool``
        :param bind_prefix: Pre-interface prefix forced within the network map
        :type bind_prefix: ``str``
        """

        for net in provider_networks:
            if net['network']['type'] == "vlan":
                if "vlan" not in self.network_types:
                    self.network_types.append('vlan')
                for vlan_range in net['network']['range'].split(','):
                    self.network_vlan_ranges.append(
                        '%s:%s' % (
                            net['network']['net_name'], vlan_range.strip()
                        )
                    )
            elif net['network']['type'] == "vxlan":
                if "vxlan" not in self.network_types:
                    self.network_types.append('vxlan')
                self.network_vxlan_ranges.append(net['network']['range'])
            elif net['network']['type'] == "flat":
                if "flat" not in self.network_types:
                    self.network_types.append('flat')
                self.network_flat_networks.append(
                    net['network']['net_name']
                )

            # Create the network mappings
            if net['network']['type'] not in ['raw', 'vxlan']:
                if 'net_name' in net['network']:
                    if is_metal:
                        if 'host_bind_override' in net['network']:
                            bind_device = net['network']['host_bind_override']
                        else:
                            bind_device = net['network']['container_bridge']
                    else:
                        bind_device = net['network']['container_interface']

                    if bind_prefix:
                        bind_device = '%s-%s' % (bind_prefix, bind_device)

                    self.network_mappings.append(
                        '%s:%s' % (
                            net['network']['net_name'],
                            bind_device
                        )
                    )

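The vlan branch of `load_networks` turns a comma-separated `range` string into the `net_name:start:end` entries Neutron expects for its vlan-range configuration. A standalone sketch of just that transformation, using a hypothetical network entry:

```python
# Hypothetical network entry, mirroring the vlan branch of load_networks:
# each comma-separated range is stripped and prefixed with the net_name.
net = {
    'network': {
        'type': 'vlan',
        'net_name': 'vlan',
        'range': '1:1, 101:101',
    }
}

network_vlan_ranges = []
for vlan_range in net['network']['range'].split(','):
    network_vlan_ranges.append(
        '%s:%s' % (net['network']['net_name'], vlan_range.strip())
    )

print(','.join(network_vlan_ranges))  # vlan:1:1,vlan:101:101
```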
def main():

    # Add in python True False
    BOOLEANS.extend(['False', 'True'])
    BOOLEANS_TRUE.append('True')
    BOOLEANS_FALSE.append('False')

    module = AnsibleModule(
        argument_spec=dict(
            provider_networks=dict(
                type='list',
                required=True
            ),
            is_metal=dict(
                choices=BOOLEANS,
                default='false'
            ),
            bind_prefix=dict(
                type='str',
                required=False,
                default=None
            )
        ),
        supports_check_mode=False
    )

    try:
        is_metal = module.params.get('is_metal')
        if is_metal in BOOLEANS_TRUE:
            module.params['is_metal'] = True
        else:
            module.params['is_metal'] = False

        pnp = ProviderNetworksParsing(module=module)
        pnp.load_networks(
            provider_networks=module.params.get('provider_networks'),
            is_metal=module.params.get('is_metal'),
            bind_prefix=module.params.get('bind_prefix')
        )

        # Response dictionary, this adds commas to all list items in string
        # format as well as preserves the list functionality for future data
        # processing.
        resp = {
            'network_vlan_ranges': ','.join(pnp.network_vlan_ranges),
            'network_vlan_ranges_list': pnp.network_vlan_ranges,
            'network_vxlan_ranges': ','.join(pnp.network_vxlan_ranges),
            'network_vxlan_ranges_list': pnp.network_vxlan_ranges,
            'network_flat_networks': ','.join(pnp.network_flat_networks),
            'network_flat_networks_list': pnp.network_flat_networks,
            'network_mappings': ','.join(pnp.network_mappings),
            'network_mappings_list': pnp.network_mappings,
            'network_types': ','.join(pnp.network_types),
            'network_types_list': pnp.network_types
        }

        module.exit_json(changed=True, **resp)
    except Exception as exp:
        resp = {'stderr': exp}
        module.fail_json(msg='Failed Process', **resp)


if __name__ == '__main__':
    main()
615
lookups/py_pkgs.py
Normal file
@ -0,0 +1,615 @@
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# (c) 2014, Kevin Carter <kevin.carter@rackspace.com>

import os
import re
import traceback

from distutils.version import LooseVersion
from ansible import __version__ as __ansible_version__
import yaml


# Used to keep track of git package parts as various files are processed
GIT_PACKAGE_DEFAULT_PARTS = dict()


ROLE_PACKAGES = dict()


REQUIREMENTS_FILE_TYPES = [
    'global-requirements.txt',
    'test-requirements.txt',
    'dev-requirements.txt',
    'requirements.txt',
    'global-requirement-pins.txt'
]


# List of variable names that could be used within the yaml files that
# represent lists of python packages.
BUILT_IN_PIP_PACKAGE_VARS = [
    'service_pip_dependencies',
    'pip_common_packages',
    'pip_container_packages',
    'pip_packages'
]


PACKAGE_MAPPING = {
    'packages': set(),
    'remote_packages': set(),
    'remote_package_parts': list(),
    'role_packages': dict()
}

def map_base_and_remote_packages(package, package_map):
    """Determine whether a package is a base package or a remote package
    and add to the appropriate set.

    :type package: ``str``
    :type package_map: ``dict``
    """
    if package.startswith(('http:', 'https:', 'git+')):
        if '@' not in package:
            package_map['packages'].add(package)
        else:
            git_parts = git_pip_link_parse(package)
            package_name = git_parts[-1]
            if not package_name:
                package_name = git_pip_link_parse(package)[0]

            for rpkg in list(package_map['remote_packages']):
                rpkg_name = git_pip_link_parse(rpkg)[-1]
                if not rpkg_name:
                    rpkg_name = git_pip_link_parse(package)[0]

                if rpkg_name == package_name:
                    package_map['remote_packages'].remove(rpkg)
                    package_map['remote_packages'].add(package)
                    break
            else:
                package_map['remote_packages'].add(package)
    else:
        package_map['packages'].add(package)

def parse_remote_package_parts(package_map):
    """Parse parts of each remote package and add them to
    the remote_package_parts list.

    :type package_map: ``dict``
    """
    keys = [
        'name',
        'version',
        'fragment',
        'url',
        'original',
        'egg_name'
    ]
    remote_pkg_parts = [
        dict(
            zip(
                keys, git_pip_link_parse(i)
            )
        ) for i in package_map['remote_packages']
    ]
    package_map['remote_package_parts'].extend(remote_pkg_parts)
    package_map['remote_package_parts'] = list(
        dict(
            (i['name'], i)
            for i in package_map['remote_package_parts']
        ).values()
    )

def map_role_packages(package_map):
    """Add and sort packages belonging to a role to the role_packages dict.

    :type package_map: ``dict``
    """
    for k, v in ROLE_PACKAGES.items():
        role_pkgs = package_map['role_packages'][k] = list()
        for pkg_list in v.values():
            role_pkgs.extend(pkg_list)
        else:
            package_map['role_packages'][k] = sorted(set(role_pkgs))

def map_base_package_details(package_map):
    """Parse package version and marker requirements and add to the
    base packages set.

    :type package_map: ``dict``
    """
    check_pkgs = dict()
    base_packages = sorted(list(package_map['packages']))
    for pkg in base_packages:
        name, versions, markers = _pip_requirement_split(pkg)
        if versions and markers:
            versions = '%s;%s' % (versions, markers)
        elif not versions and markers:
            versions = ';%s' % markers

        if name in check_pkgs:
            if versions and not check_pkgs[name]:
                check_pkgs[name] = versions
        else:
            check_pkgs[name] = versions
    else:
        return_pkgs = list()
        for k, v in check_pkgs.items():
            if v:
                return_pkgs.append('%s%s' % (k, v))
            else:
                return_pkgs.append(k)
        package_map['packages'] = set(return_pkgs)

def git_pip_link_parse(repo):
    """Return a tuple containing the parts of a git repository.

    Example parsing a standard git repo:
    >>> git_pip_link_parse('git+https://github.com/username/repo-name@tag')
    ('repo-name',
     'tag',
     None,
     'https://github.com/username/repo',
     'git+https://github.com/username/repo@tag',
     'repo_name')

    Example parsing a git repo that uses an installable from a subdirectory:
    >>> git_pip_link_parse(
    ...     'git+https://github.com/username/repo@tag#egg=plugin.name'
    ...     '&subdirectory=remote_path/plugin.name'
    ... )
    ('plugin.name',
     'tag',
     'remote_path/plugin.name',
     'https://github.com/username/repo',
     'git+https://github.com/username/repo@tag#egg=plugin.name&'
     'subdirectory=remote_path/plugin.name',
     'plugin.name')

    :param repo: git repo string to parse.
    :type repo: ``str``
    :returns: ``tuple``
    """

    def _meta_return(meta_data, item):
        """Return the value of an item in meta data."""

        return meta_data.lstrip('#').split('%s=' % item)[-1].split('&')[0]

    _git_url = repo.split('+')
    if len(_git_url) >= 2:
        _git_url = _git_url[1]
    else:
        _git_url = _git_url[0]

    git_branch_sha = _git_url.split('@')
    if len(git_branch_sha) > 2:
        branch = git_branch_sha.pop()
        url = '@'.join(git_branch_sha)
    elif len(git_branch_sha) > 1:
        url, branch = git_branch_sha
    else:
        url = git_branch_sha[0]
        branch = 'master'

    egg_name = name = os.path.basename(url.rstrip('/'))
    egg_name = egg_name.replace('-', '_')

    _branch = branch.split('#')
    branch = _branch[0]

    plugin_path = None
    # Determine if the package is a plugin type
    if len(_branch) > 1:
        if 'subdirectory=' in _branch[-1]:
            plugin_path = _meta_return(_branch[-1], 'subdirectory')
            name = os.path.basename(plugin_path)

        if 'egg=' in _branch[-1]:
            egg_name = _meta_return(_branch[-1], 'egg')
            egg_name = egg_name.replace('-', '_')

        if 'gitname=' in _branch[-1]:
            name = _meta_return(_branch[-1], 'gitname')

    return name.lower(), branch, plugin_path, url, repo, egg_name

def _pip_requirement_split(requirement):
    """Split pip versions from a given requirement.

    The method will return the package name, versions, and any markers.

    :type requirement: ``str``
    :returns: ``tuple``
    """
    version_descriptors = "(>=|<=|>|<|==|~=|!=)"
    requirement = requirement.split(';')
    requirement_info = re.split(r'%s\s*' % version_descriptors, requirement[0])
    name = requirement_info[0]
    marker = None
    if len(requirement) > 1:
        marker = requirement[-1]
    versions = None
    if len(requirement_info) > 1:
        versions = ''.join(requirement_info[1:])

    return name, versions, marker

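The splitter above separates a pip requirement string into its name, version specifiers, and environment marker. A self-contained sketch of the same logic (a standalone copy for illustration, not the module's function):

```python
import re


def pip_requirement_split(requirement):
    """Standalone copy of the splitting logic above, for illustration."""
    version_descriptors = "(>=|<=|>|<|==|~=|!=)"
    requirement = requirement.split(';')
    # re.split with a capturing group keeps the operators in the result,
    # so joining everything after the name reassembles the full specifier.
    requirement_info = re.split(r'%s\s*' % version_descriptors, requirement[0])
    name = requirement_info[0]
    marker = None
    if len(requirement) > 1:
        marker = requirement[-1]
    versions = None
    if len(requirement_info) > 1:
        versions = ''.join(requirement_info[1:])
    return name, versions, marker


print(pip_requirement_split('lxml>=2.3,!=3.7'))
print(pip_requirement_split('futures>=3.0;python_version=="2.7"'))
```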
class DependencyFileProcessor(object):
|
||||
def __init__(self, local_path):
|
||||
"""Find required files.
|
||||
|
||||
:type local_path: ``str``
|
||||
:return:
|
||||
"""
|
||||
self.pip = dict()
|
||||
self.pip['git_package'] = list()
|
||||
self.pip['py_package'] = list()
|
||||
self.pip['git_data'] = list()
|
||||
self.git_pip_install = 'git+%s@%s'
|
||||
self.file_names = self._get_files(path=local_path)
|
||||
|
||||
# Process everything simply by calling the method
|
||||
self._process_files()
|
||||
|
||||
def _py_pkg_extend(self, packages):
|
||||
for pkg in packages:
|
||||
pkg_name = _pip_requirement_split(pkg)[0]
|
||||
for py_pkg in self.pip['py_package']:
|
||||
py_pkg_name = _pip_requirement_split(py_pkg)[0]
|
||||
if pkg_name == py_pkg_name:
|
||||
self.pip['py_package'].remove(py_pkg)
|
||||
else:
|
||||
self.pip['py_package'].extend([i.lower() for i in packages])
|
||||
|
||||
@staticmethod
|
||||
def _filter_files(file_names, ext):
|
||||
"""Filter the files and return a sorted list.
|
||||
|
||||
:type file_names:
|
||||
:type ext: ``str`` or ``tuple``
|
||||
:returns: ``list``
|
||||
"""
|
||||
_file_names = list()
|
||||
file_name_words = ['/defaults/', '/vars/', '/user_']
|
||||
file_name_words.extend(REQUIREMENTS_FILE_TYPES)
|
||||
for file_name in file_names:
|
||||
if file_name.endswith(ext):
|
||||
if any(i in file_name for i in file_name_words):
|
||||
_file_names.append(file_name)
|
||||
else:
|
||||
return _file_names
|
||||
|
||||
@staticmethod
|
||||
def _get_files(path):
|
||||
"""Return a list of all files in the defaults/repo_packages directory.
|
||||
|
||||
:type path: ``str``
|
||||
:returns: ``list``
|
||||
"""
|
||||
paths = os.walk(os.path.abspath(path))
|
||||
files = list()
|
||||
for fpath, _, afiles in paths:
|
||||
for afile in afiles:
|
||||
files.append(os.path.join(fpath, afile))
|
||||
else:
|
||||
return files
|
||||
|
||||
def _check_plugins(self, git_repo_plugins, git_data):
|
||||
"""Check if the git url is a plugin type.
|
||||
|
||||
:type git_repo_plugins: ``dict``
|
||||
:type git_data: ``dict``
|
||||
"""
|
||||
for repo_plugin in git_repo_plugins:
|
||||
strip_plugin_path = repo_plugin['package'].lstrip('/')
|
||||
plugin = '%s/%s' % (
|
||||
repo_plugin['path'].strip('/'),
|
||||
strip_plugin_path
|
||||
)
|
||||
|
||||
name = git_data['name'] = os.path.basename(strip_plugin_path)
|
||||
git_data['egg_name'] = name.replace('-', '_')
|
||||
package = self.git_pip_install % (
|
||||
git_data['repo'], git_data['branch']
|
||||
)
|
||||
package += '#egg=%s' % git_data['egg_name']
|
||||
package += '&subdirectory=%s' % plugin
|
||||
package += '&gitname=%s' % name
|
||||
if git_data['fragments']:
|
||||
package += '&%s' % git_data['fragments']
|
||||
|
||||
self.pip['git_data'].append(git_data)
|
||||
self.pip['git_package'].append(package)
|
||||
|
||||
if name not in GIT_PACKAGE_DEFAULT_PARTS:
|
||||
GIT_PACKAGE_DEFAULT_PARTS[name] = git_data.copy()
|
||||
else:
|
||||
GIT_PACKAGE_DEFAULT_PARTS[name].update(git_data.copy())
|
||||
|
||||
@staticmethod
|
||||
def _check_defaults(git_data, name, item):
|
||||
"""Check if a default exists and use it if an item is undefined.
|
||||
|
||||
:type git_data: ``dict``
|
||||
:type name: ``str``
|
||||
:type item: ``str``
|
||||
"""
|
||||
if not git_data[item] and name in GIT_PACKAGE_DEFAULT_PARTS:
|
||||
check_item = GIT_PACKAGE_DEFAULT_PARTS[name].get(item)
|
||||
if check_item:
|
||||
git_data[item] = check_item
|
||||
|
    def _process_git(self, loaded_yaml, git_item):
        """Process git repos.

        :type loaded_yaml: ``dict``
        :type git_item: ``str``
        """
        git_data = dict()
        if git_item.split('_')[0] == 'git':
            prefix = ''
        else:
            prefix = '%s_' % git_item.split('_git_repo')[0].replace('.', '_')

        # Set the various variable definitions
        repo_var = prefix + 'git_repo'
        name_var = prefix + 'git_package_name'
        branch_var = prefix + 'git_install_branch'
        fragment_var = prefix + 'git_install_fragments'
        plugins_var = prefix + 'repo_plugins'

        # get the repo definition
        git_data['repo'] = loaded_yaml.get(repo_var)

        # get the repo name definition
        name = git_data['name'] = loaded_yaml.get(name_var)
        if not name:
            name = git_data['name'] = os.path.basename(
                git_data['repo'].rstrip('/')
            )
        git_data['egg_name'] = name.replace('-', '_')

        # get the repo branch definition
        git_data['branch'] = loaded_yaml.get(branch_var)
        self._check_defaults(git_data, name, 'branch')
        if not git_data['branch']:
            git_data['branch'] = 'master'

        package = self.git_pip_install % (git_data['repo'], git_data['branch'])

        # get the repo fragment definitions, if any
        git_data['fragments'] = loaded_yaml.get(fragment_var)
        self._check_defaults(git_data, name, 'fragments')

        package += '#egg=%s' % git_data['egg_name']
        package += '&gitname=%s' % name
        if git_data['fragments']:
            package += '&%s' % git_data['fragments']

        self.pip['git_package'].append(package)
        self.pip['git_data'].append(git_data.copy())

        # Set the default package parts to track data during the run
        if name not in GIT_PACKAGE_DEFAULT_PARTS:
            GIT_PACKAGE_DEFAULT_PARTS[name] = git_data.copy()
        else:
            GIT_PACKAGE_DEFAULT_PARTS[name].update(git_data.copy())

        # get the repo plugin definitions, if any
        git_data['plugins'] = loaded_yaml.get(plugins_var)
        self._check_defaults(git_data, name, 'plugins')
        if git_data['plugins']:
            self._check_plugins(
                git_repo_plugins=git_data['plugins'],
                git_data=git_data
            )

    def _process_files(self):
        """Process files."""

        role_name = None
        for file_name in self._filter_files(self.file_names, ('yaml', 'yml')):
            with open(file_name, 'r') as f:
                # If there is an exception loading the file continue
                # and if the loaded_config is None continue. This makes
                # sure no bad config gets passed to the rest of the process.
                try:
                    loaded_config = yaml.safe_load(f.read())
                except Exception:  # Broad exception so everything is caught
                    continue
                else:
                    if not loaded_config:
                        continue

                if 'roles' in file_name:
                    _role_name = file_name.split('roles%s' % os.sep)[-1]
                    role_name = _role_name.split(os.sep)[0]

            for key, values in loaded_config.items():
                # This conditional is set to ensure we're not processing git
                # repos from the defaults file which may conflict with what is
                # being set in the repo_packages files.
                if '/defaults/main' not in file_name:
                    if key.endswith('git_repo'):
                        self._process_git(
                            loaded_yaml=loaded_config,
                            git_item=key
                        )

                if [i for i in BUILT_IN_PIP_PACKAGE_VARS if i in key]:
                    self._py_pkg_extend(values)
                    if role_name:
                        if role_name in ROLE_PACKAGES:
                            role_pkgs = ROLE_PACKAGES[role_name]
                        else:
                            role_pkgs = ROLE_PACKAGES[role_name] = dict()

                        pkgs = role_pkgs.get(key, list())
                        if 'optional' not in key:
                            pkgs.extend(values)
                        ROLE_PACKAGES[role_name][key] = pkgs
                    else:
                        for k, v in ROLE_PACKAGES.items():
                            for item_name in v.keys():
                                if key == item_name:
                                    ROLE_PACKAGES[k][item_name].extend(values)

        for file_name in self._filter_files(self.file_names, 'txt'):
            if os.path.basename(file_name) in REQUIREMENTS_FILE_TYPES:
                with open(file_name, 'r') as f:
                    packages = [
                        i.split()[0] for i in f.read().splitlines()
                        if i
                        if not i.startswith('#')
                    ]
                    self._py_pkg_extend(packages)


def _abs_path(path):
    return os.path.abspath(
        os.path.expanduser(
            path
        )
    )


class LookupModule(object):
    def __new__(class_name, *args, **kwargs):
        if LooseVersion(__ansible_version__) < LooseVersion("2.0"):
            from ansible import utils, errors

            class LookupModuleV1(object):
                def __init__(self, basedir=None, **kwargs):
                    """Run the lookup module.

                    :type basedir:
                    :type kwargs:
                    """
                    self.basedir = basedir

                def run(self, terms, inject=None, **kwargs):
                    """Run the main application.

                    :type terms: ``str``
                    :type inject: ``str``
                    :type kwargs: ``dict``
                    :returns: ``list``
                    """
                    terms = utils.listify_lookup_plugin_terms(
                        terms,
                        self.basedir,
                        inject
                    )
                    if isinstance(terms, basestring):
                        terms = [terms]

                    return_data = PACKAGE_MAPPING

                    for term in terms:
                        return_list = list()
                        try:
                            dfp = DependencyFileProcessor(
                                local_path=_abs_path(str(term))
                            )
                            return_list.extend(dfp.pip['py_package'])
                            return_list.extend(dfp.pip['git_package'])
                        except Exception as exp:
                            raise errors.AnsibleError(
                                'lookup_plugin.py_pkgs(%s) returned "%s" error "%s"' % (
                                    term,
                                    str(exp),
                                    traceback.format_exc()
                                )
                            )

                        for item in return_list:
                            map_base_and_remote_packages(item, return_data)
                        else:
                            parse_remote_package_parts(return_data)
                    else:
                        map_role_packages(return_data)
                        map_base_package_details(return_data)
                        # Sort everything within the returned data
                        for key, value in return_data.items():
                            if isinstance(value, (list, set)):
                                return_data[key] = sorted(value)
                        return [return_data]

            return LookupModuleV1(*args, **kwargs)

        else:
            from ansible.errors import AnsibleError
            from ansible.plugins.lookup import LookupBase

            class LookupModuleV2(LookupBase):
                def run(self, terms, variables=None, **kwargs):
                    """Run the main application.

                    :type terms: ``str``
                    :type variables: ``str``
                    :type kwargs: ``dict``
                    :returns: ``list``
                    """
                    if isinstance(terms, basestring):
                        terms = [terms]

                    return_data = PACKAGE_MAPPING

                    for term in terms:
                        return_list = list()
                        try:
                            dfp = DependencyFileProcessor(
                                local_path=_abs_path(str(term))
                            )
                            return_list.extend(dfp.pip['py_package'])
                            return_list.extend(dfp.pip['git_package'])
                        except Exception as exp:
                            raise AnsibleError(
                                'lookup_plugin.py_pkgs(%s) returned "%s" error "%s"' % (
                                    term,
                                    str(exp),
                                    traceback.format_exc()
                                )
                            )

                        for item in return_list:
                            map_base_and_remote_packages(item, return_data)
                        else:
                            parse_remote_package_parts(return_data)
                    else:
                        map_role_packages(return_data)
                        map_base_package_details(return_data)
                        # Sort everything within the returned data
                        for key, value in return_data.items():
                            if isinstance(value, (list, set)):
                                return_data[key] = sorted(value)
                        return [return_data]

            return LookupModuleV2(*args, **kwargs)


# Used for testing and debugging usage: `python plugins/lookups/py_pkgs.py ../`
if __name__ == '__main__':
    import sys
    import json
    print(json.dumps(LookupModule().run(terms=sys.argv[1:]), indent=4))
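The ``LookupModule`` above uses ``__new__`` to dispatch to a version-specific implementation at instantiation time: callers construct one class, but receive an instance of ``LookupModuleV1`` or ``LookupModuleV2`` depending on the Ansible version detected. A minimal, self-contained sketch of that pattern (the class names and version tuple here are illustrative stand-ins, not the real plugin):

```python
# Hedged sketch of the version-dispatch pattern used by LookupModule:
# __new__ inspects a version marker and returns an instance of a
# version-specific class. ANSIBLE_VERSION is a stand-in for the real
# detected version; LookupV1/LookupV2 are hypothetical implementations.
ANSIBLE_VERSION = (2, 1)  # pretend a 2.x Ansible is installed


class VersionedLookup(object):
    def __new__(cls, *args, **kwargs):
        if ANSIBLE_VERSION < (2, 0):
            class LookupV1(object):
                def run(self, terms, **kwargs):
                    return ['v1', terms]
            return LookupV1(*args, **kwargs)
        else:
            class LookupV2(object):
                def run(self, terms, **kwargs):
                    return ['v2', terms]
            return LookupV2(*args, **kwargs)
```

Because ``__new__`` returns an instance of a different class, a single import gives callers the correct ``run`` signature for their Ansible version.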
31 meta/main.yml Normal file
@@ -0,0 +1,31 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

galaxy_info:
  author: rcbops
  description: Plugin collection
  company: Rackspace
  license: Apache2
  min_ansible_version: 1.6.6
  platforms:
    - name: Ubuntu
      versions:
        - trusty
  categories:
    - cloud
    - rabbitmq
    - development
    - openstack
dependencies: []
38 run_tests.sh Normal file
@@ -0,0 +1,38 @@
|
||||
#!/usr/bin/env bash
|
||||
# Copyright 2015, Rackspace US, Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
set -euov
|
||||
|
||||
ROLE_NAME=$(basename $(pwd))
|
||||
FUNCTIONAL_TEST=${FUNCTIONAL_TEST:-true}
|
||||
|
||||
pushd tests
|
||||
ansible-galaxy install \
|
||||
--role-file=ansible-role-requirements.yml \
|
||||
--ignore-errors \
|
||||
--force
|
||||
|
||||
ansible-playbook -i inventory \
|
||||
--syntax-check \
|
||||
--list-tasks \
|
||||
-e "rolename=${ROLE_NAME}" \
|
||||
test.yml
|
||||
|
||||
ansible-lint test.yml
|
||||
|
||||
if ${FUNCTIONAL_TEST}; then
|
||||
ansible-playbook -i inventory -e "rolename=${ROLE_NAME}" test.yml
|
||||
fi
|
||||
popd
|
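run_tests.sh lets callers skip the functional playbook run by exporting ``FUNCTIONAL_TEST=false``; the default comes from the ``${VAR:-default}`` parameter expansion. A quick illustration of that idiom (the variable name mirrors the script):

```shell
# ${FUNCTIONAL_TEST:-true} expands to "true" when the variable is unset
# or empty, and to the caller's value otherwise -- this is how
# `FUNCTIONAL_TEST=false ./run_tests.sh` disables the functional run.
unset FUNCTIONAL_TEST
echo "${FUNCTIONAL_TEST:-true}"    # prints: true

FUNCTIONAL_TEST=false
echo "${FUNCTIONAL_TEST:-true}"    # prints: false
```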
24 setup.cfg Normal file
@@ -0,0 +1,24 @@
|
||||
[metadata]
|
||||
name = openstack-ansible-plugins
|
||||
summary = plugins for OpenStack Ansible
|
||||
description-file =
|
||||
README.rst
|
||||
author = OpenStack
|
||||
author-email = openstack-dev@lists.openstack.org
|
||||
home-page = http://www.openstack.org/
|
||||
classifier =
|
||||
Intended Audience :: Developers
|
||||
Intended Audience :: System Administrators
|
||||
License :: OSI Approved :: Apache Software License
|
||||
Operating System :: POSIX :: Linux
|
||||
|
||||
[build_sphinx]
|
||||
all_files = 1
|
||||
build-dir = doc/build
|
||||
source-dir = doc/source
|
||||
|
||||
[pbr]
|
||||
warnerrors = True
|
||||
|
||||
[wheel]
|
||||
universal = 1
|
22 setup.py Normal file
@@ -0,0 +1,22 @@
|
||||
#!/usr/bin/env python
|
||||
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
|
||||
import setuptools
|
||||
|
||||
setuptools.setup(
|
||||
setup_requires=['pbr'],
|
||||
pbr=True)
|
87 tox.ini Normal file
@@ -0,0 +1,87 @@
|
||||
[tox]
|
||||
minversion = 1.6
|
||||
skipsdist = True
|
||||
envlist = docs,pep8,bashate,ansible-syntax,ansible-lint
|
||||
|
||||
[testenv]
|
||||
usedevelop = True
|
||||
install_command = pip install -U {opts} {packages}
|
||||
setenv = VIRTUAL_ENV={envdir}
|
||||
deps = -r{toxinidir}/dev-requirements.txt
|
||||
commands =
|
||||
/usr/bin/find . -type f -name "*.pyc" -delete
|
||||
ansible-galaxy install \
|
||||
--role-file=ansible-role-requirements.yml \
|
||||
--ignore-errors \
|
||||
--force
|
||||
|
||||
[testenv:docs]
|
||||
commands = python setup.py build_sphinx
|
||||
|
||||
# environment used by the -infra templated docs job
|
||||
[testenv:venv]
|
||||
deps = -r{toxinidir}/dev-requirements.txt
|
||||
commands = {posargs}
|
||||
|
||||
# Run hacking/flake8 check for all python files
|
||||
[testenv:pep8]
|
||||
deps = flake8
|
||||
whitelist_externals = bash
|
||||
commands =
|
||||
bash -c "grep -Irl \
|
||||
-e '!/usr/bin/env python' \
|
||||
-e '!/bin/python' \
|
||||
-e '!/usr/bin/python' \
|
||||
--exclude-dir '.*' \
|
||||
--exclude-dir 'doc' \
|
||||
--exclude-dir '*.egg' \
|
||||
--exclude-dir '*.egg-info' \
|
||||
--exclude 'tox.ini' \
|
||||
--exclude '*.sh' \
|
||||
{toxinidir} | xargs flake8 --verbose"
|
||||
|
||||
[flake8]
|
||||
# Ignores the following rules due to how ansible modules work in general
|
||||
# F403 'from ansible.module_utils.basic import *' used; unable to detect undefined names
|
||||
# H303 No wildcard (*) import.
|
||||
ignore=F403,H303
|
||||
|
||||
# Run bashate check for all bash scripts
|
||||
# Ignores the following rules:
|
||||
# E003: Indent not multiple of 4 (we prefer to use multiples of 2)
|
||||
[testenv:bashate]
|
||||
deps = bashate
|
||||
whitelist_externals = bash
|
||||
commands =
|
||||
bash -c "grep -Irl \
|
||||
-e '!/usr/bin/env bash' \
|
||||
-e '!/bin/bash' \
|
||||
-e '!/bin/sh' \
|
||||
--exclude-dir '.*' \
|
||||
--exclude-dir '*.egg' \
|
||||
--exclude-dir '*.egg-info' \
|
||||
--exclude 'tox.ini' \
|
||||
{toxinidir} | xargs bashate --verbose --ignore=E003"
|
||||
|
||||
[testenv:ansible-syntax]
|
||||
changedir = tests
|
||||
commands =
|
||||
ansible-galaxy install \
|
||||
--role-file=ansible-role-requirements.yml \
|
||||
--ignore-errors \
|
||||
--force
|
||||
ansible-playbook -i inventory \
|
||||
--syntax-check \
|
||||
--list-tasks \
|
||||
-e "rolename={toxinidir}" \
|
||||
test.yml
|
||||
|
||||
[testenv:ansible-lint]
|
||||
changedir = tests
|
||||
commands =
|
||||
ansible-galaxy install \
|
||||
--role-file=ansible-role-requirements.yml \
|
||||
--ignore-errors \
|
||||
--force
|
||||
ansible-lint test.yml
|
||||
|
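The pep8 and bashate environments select files by shebang rather than by extension: ``grep -Irl`` prints the names of files whose contents match any ``-e`` pattern, and that list is piped to the linter. A small sketch of the selection step (the temporary directory and file names below are for illustration only):

```shell
# grep -Irl: recurse (-r), skip binary files (-I), print only the names
# of matching files (-l). The patterns match shebang fragments, so
# scripts are found even without a .py extension.
tmp=$(mktemp -d)
printf '#!/usr/bin/env python\nprint("hi")\n' > "$tmp/tool"
printf 'plain notes, no shebang\n' > "$tmp/notes.txt"
grep -Irl -e '!/usr/bin/env python' -e '!/bin/python' "$tmp"
rm -rf "$tmp"
```

Only ``$tmp/tool`` is listed, since ``notes.txt`` contains no matching shebang fragment.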