HariSekhon/DevOps-Perl-tools

25+ DevOps CLI Tools - Anonymizer, SQL ReCaser (MySQL, PostgreSQL, AWS Redshift, Snowflake, Apache Drill, Hive, Impala, Cassandra CQL, Microsoft SQL Server, Oracle, Couchbase N1QL, Dockerfiles), Hadoop HDFS & Hive tools, Solr/SolrCloud CLI, Nginx stats & HTTP(S) URL watchers for load-balanced web farms, Linux tools etc.
DevOps, Linux, SQL, Web, Big Data, NoSQL, templates for various programming languages and Kubernetes. All programs have `--help`.
Hari Sekhon
Cloud & Big Data Contractor, United Kingdom
(you're welcome to connect with me on LinkedIn)
Make sure you run `make update` if updating, and not just `git pull`, as you will often need the latest library submodule and possibly new upstream libraries.
All programs and their pre-compiled dependencies can be found ready to run on DockerHub.
List all programs:
docker run harisekhon/perl-tools
Run any given program:
docker run harisekhon/perl-tools <program> <args>
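For example, to print the usage of one of the tools documented below (every program supports `--help`, as noted above):

docker run harisekhon/perl-tools anonymize.pl --help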
The following one-liner installs git and make, pulls the repo, and builds the dependencies:

curl -L https://git.io/perl-bootstrap | sh
or manually:

git clone https://github.com/HariSekhon/DevOps-Perl-tools perl-tools
cd perl-tools
make
Make sure to read the Detailed Build Instructions further down for more information.
Optional: Generate self-contained Perl scripts with all dependencies built in to each file for easy distribution.

After the `make` build has finished, if you want to make self-contained versions of all the Perl scripts with all dependencies included for copying around, run:

make fatpacks

The self-contained scripts will be available in the `fatpacks/` directory, which is also tarred to `fatpacks.tar.gz`.
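The fatpacked scripts are intended for copying around; for example, pushing a single self-contained script to another machine (the hostname and destination path here are hypothetical):

scp fatpacks/anonymize.pl user@remotehost:/usr/local/bin/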
All programs come with a `--help` switch which includes a program description and the list of command line options.

Environment variables are supported for convenience and also to hide credentials from being exposed in the process list, eg. `$PASSWORD`. These are indicated in the `--help` descriptions in brackets next to each option and often have more specific overrides with higher precedence, eg. `$SOLR_HOST` takes priority over `$HOST`.
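A quick sketch of that precedence (the hostnames are made up for illustration):

export HOST=lb.example.com          # generic fallback read by many tools
export SOLR_HOST=solr1.example.com  # more specific override, takes priority for the Solr tools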
NOTE: Hadoop HDFS API Tools, Pig => Elasticsearch/Solr, Pig Jython UDFs and authenticated PySpark IPython Notebook have moved to my DevOps Python Tools repo.
- anonymize.pl - anonymizes your configs / logs from files or stdin (for pasting to Apache Jira tickets or mailing lists). A usage sketch follows this list
  - anonymizes custom patterns too: put regexes of your Name/Company/Project/Database/Tables in anonymize_custom.conf to have them replaced with <custom>
  - placeholder tokens indicate what was stripped out (eg. <fqdn>, <password>, <custom>)
  - --ip-prefix leaves the last IP octet to aid in cluster debugging, so you can still see differentiated nodes communicating with each other to compare configs and log communications
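A usage sketch, reading from stdin per the description above (`--ip-prefix` is the documented switch; any further selection switches that may be required aren't shown here - check `--help`):

anonymize.pl --ip-prefix < core-site.xml > core-site-anonymized.xml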
- diffnet.pl - simplifies diff output to show only lines added/removed, not moved, from patch files or stdin (pipe from standard `diff` or `git diff` commands)
- xml_diff.pl / hadoop_config_diff.pl - tool to help find differences between XML / Hadoop configs, can diff XML from HTTP addresses to diff live running clusters
- titlecase.pl - capitalizes the first letter of each input word in files or stdin
- pdf_to_txt.pl - converts PDF to text for analytics (see also Apache PDFBox and the pdf2text unix tool)
- java_show_classpath.pl - shows Java classpaths, one per line, of currently running Java programs
- flock.pl - file locking to prevent running the same program twice at the same time. RHEL 6 now has a native version of this
- uniq_order_preserved.pl - like `uniq` but you don't have to sort first and it preserves the ordering (see the sketch after this list)
- colors.pl - prints an ASCII color code matrix of all foreground + background combinations showing the corresponding terminal escape codes to help with tuning your shell
- matrix.pl - prints a cool matrix of vertical scrolling characters using terminal tricks
- welcome.pl - cool spinning welcome message greeting your username and showing last login time and user to put in your shell's `.profile` (there is also a Python version in my DevOps Python Tools repo)
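For instance, the behaviour of uniq_order_preserved.pl can be illustrated like this (the output shown is what the description implies, not a captured run):

printf 'b\na\nb\nc\na\n' | uniq_order_preserved.pl

This should print b, a, c - the first occurrence of each line, in original input order, with no prior sort needed.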
Written to help clean up docs and SQL scripts.

I don't even bother writing capitalised SQL code any more, I just run it through this via a vim shortcut (.vimrc).
- sqlcase.pl - capitalizes SQL code in files or stdin (see the example after this list)
- *case.pl - more specific language support for just about every database and SQL-like language out there plus a few more non-SQL languages like Neo4j Cypher and Docker's Dockerfiles:
  - athenacase.pl - AWS Athena SQL
  - cqlcase.pl - Cassandra CQL
  - cyphercase.pl - Neo4j Cypher
  - dockercase.pl - Docker (Dockerfiles)
  - drillcase.pl - Apache Drill SQL
  - hivecase.pl - Hive HQL
  - impalacase.pl - Impala SQL
  - influxcase.pl - InfluxDB InfluxQL
  - mssqlcase.pl - Microsoft SQL Server SQL
  - mysqlcase.pl - MySQL SQL
  - n1qlcase.pl - Couchbase N1QL
  - oraclecase.pl / plsqlcase.pl - Oracle SQL
  - postgrescase.pl / pgsqlcase.pl - PostgreSQL SQL
  - pigcase.pl - Pig Latin
  - prestocase.pl - Presto SQL
  - redshiftcase.pl - AWS Redshift SQL
  - snowflakecase.pl - Snowflake SQL
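An illustrative run of sqlcase.pl (the uppercased keywords are the expected behaviour per the description above, not captured output):

echo 'select count(*) from logs where status = 200' | sqlcase.pl

Expected result: SELECT COUNT(*) FROM logs WHERE status = 200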
- watch_url.pl - watches a given URL, outputting status code and optionally selected output
  - Useful for debugging web farms behind load balancers and seeing the distribution to different servers
  - Tip: set a /hostname handler to return which server you're hitting for each request in real-time
  - I also use this as a ping replacement to google.com to check internet networking in environments where everything except HTTP traffic is blocked
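A sketch of the load balancer debugging tip above (the URL is hypothetical, and whether the URL is passed positionally or via a switch is an assumption - check `--help`):

watch_url.pl http://loadbalancer.example.com/hostname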
- watch_nginx_stats.pl - watches Nginx stats via the HttpStubStatusModule
- solr_cli.pl - Solr CLI tool for fast and easy Solr / SolrCloud administration. Supports optional environment variables to minimize --switches (can be set permanently in `solr/solr-env.sh`). Uses the Solr Cores and Collections APIs, making Solr administration a lot easier.
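For example, to stop repeating the Solr host on every invocation (the hostname is made up; `$SOLR_HOST` is one of the environment variables described earlier):

echo 'export SOLR_HOST=solr1.example.com' >> solr/solr-env.sh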
- ambari_freeipa_kerberos_setup.pl - automates Hadoop Ambari cluster security Kerberos setup of FreeIPA principals and keytab distribution to the cluster nodes
- hadoop_hdfs_file_age_out.pl - prints or removes all Hadoop HDFS files in a given directory tree older than a specified age
- hadoop_hdfs_snapshot_age_out.pl - prints or removes Hadoop HDFS snapshots older than a given age or matching a given regex pattern
- hbase_flush_tables.sh - flushes all or selected HBase tables (useful when bulk loading OpenTSDB with Durability.SKIP_WAL) (there is also a Python version of this in my DevOps Python Tools repo)
- hive_to_elasticsearch.pl - bulk indexes structured Hive tables in Hadoop to Elasticsearch clusters - includes support for Kerberos, Hive partitioned tables with selected partitions, selected columns, index creation with configurable sharding, index aliasing and optimization
- hive_table_print_null_columns.pl - finds Hive columns with all NULLs (see newer versions in my DevOps Python Tools repo for HiveServer2 and Impala)
- hive_table_count_rows_with_nulls.pl - counts number of rows containing NULLs in any field
- pentaho_backup.pl - script to back up the local Pentaho BA or DI Server
- ibm_bigsheets_config_git.pl - revision controls IBM BigSheets configurations from API to Git
- datameer_config_git.pl - revision controls Datameer configurations from API to Git
- hadoop_config_diff.pl - tool to diff configs between Hadoop clusters XML from files or live HTTP config endpoints
The `make` command will initialize my library submodule and use `sudo` to install the required system packages and CPAN modules. If you want more control over what is installed, you must follow the Manual Setup section instead.
The automated build will use 'sudo' to install required Perl CPAN libraries to the system unless running as root or it detects being inside Perlbrew. If you want to install some of the common Perl libraries such as `Net::DNS` and `LWP::*` using your OS packages instead of installing from CPAN, then follow the Manual Build section below.
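For instance, on RHEL-based systems those two libraries would typically come from OS packages like this (the package names are an assumption and differ between distributions):

sudo yum install perl-Net-DNS perl-libwww-perl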
Enter the tools directory and run git submodule init and git submodule update to fetch my library repo, then install the CPAN modules as mentioned further down:

git clone https://github.com/HariSekhon/DevOps-Perl-tools perl-tools
cd perl-tools
git submodule update --init
Then proceed to install the CPAN modules below by hand.
Install the following CPAN modules using the cpan command, using `sudo` if you're not root:
sudo cpan JSON LWP::Simple LWP::UserAgent Term::ReadKey Text::Unidecode Time::HiRes XML::LibXML XML::Validate ...
The full list of CPAN modules is in `setup/cpan-requirements.txt`.
You can install the entire list of cpan requirements like so:
sudo cpan $(sed 's/#.*//' < setup/cpan-requirements.txt)
You're now ready to use these programs.
Download the Tools and Lib git repos as zip files:
https://github.com/HariSekhon/DevOps-Perl-tools/archive/master.zip
https://github.com/HariSekhon/lib/archive/master.zip
Unzip both and move Lib to the `lib` folder under Tools:

unzip devops-perl-tools-master.zip
unzip lib-master.zip
mv -v devops-perl-tools-master perl-tools
mv -v lib-master lib
mv -vf lib perl-tools/
Proceed to install CPAN modules for whichever programs you want to use using your standard procedure - usually an internal mirror or proxy server to CPAN, or rpms / debs (some libraries are packaged by Linux distributions).
All CPAN modules are listed in the `setup/cpan-requirements.txt` file.
Strict validations include host/domain/FQDNs using TLDs which are populated from the official IANA list. This is done via my Lib submodule - see there for details on configuring it to permit custom TLDs like `.local`, `.intranet`, `.vm`, `.cloud` etc. (all already included in there because they're common across companies' internal environments).
Run `make update`. This will git pull and then git submodule update, which is necessary to pick up corresponding library updates.

If you update often and want to just quickly git pull + submodule update but skip rebuilding all those dependencies each time, then run `make update-no-recompile` (this will miss new library dependencies - do a full `make update` if you encounter issues).
Continuous Integration is run on this repo with tests for success and failure scenarios:
- unit tests for the custom supporting perl library
- integration tests of the top level programs using the libraries for things like option parsing
- functional tests for the top level programs using local test data and Docker containers
To trigger all tests run:
make test

which will start with the underlying libraries, then move on to top level integration tests and functional tests using Docker containers if Docker is available.
Patches, improvements and even general feedback are welcome in the form of GitHub pull requests and issue tickets.
You might also be interested in the following really nice Jupyter notebook for HDFS space analysis created by another Hortonworks guy, Jonas Straub:
https://github.com/mr-jstraub/HDFSQuota/blob/master/HDFSQuota.ipynb
The rest of my original source repos are here.

Pre-built Docker images are available on my DockerHub.