Friday, December 27, 2019

Web testing with Selenium / Geb / WebDriver

Spock

@Stepwise forces the feature methods in a spec to run sequentially in declared order (even under a parallel spec runner). If a method fails, the remaining tests are skipped.
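
A minimal sketch of a @Stepwise spec (the class and method names are illustrative):

import spock.lang.Specification
import spock.lang.Stepwise

@Stepwise
class CheckoutSpec extends Specification {

    def "log in"() {
        expect: true   // runs first
    }

    def "add item to basket"() {
        expect: true   // runs second; skipped if "log in" fails
    }
}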


To check whether tests should be run or skipped you can use the annotations @Requires or @IgnoreIf.
However, these only work with system/environment properties (not instance variables).
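
For example, a sketch gating tests on a system property and on the OS (the property name is illustrative):

import spock.lang.IgnoreIf
import spock.lang.Requires
import spock.lang.Specification

class EnvironmentSpec extends Specification {

    @Requires({ sys['integration.tests'] == 'true' })
    def "runs only when the system property is set"() {
        expect: true
    }

    @IgnoreIf({ os.windows })
    def "skipped on Windows"() {
        expect: true
    }
}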

For instance-variable assessment you can use JUnit's Assume.assumeTrue method. See https://stackoverflow.com/questions/33818907/programmatically-skip-a-test-in-spock
It throws an AssumptionViolatedException, which Spock already catches, so the test case ends up being ignored.
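
A sketch of this approach (checkServer is an illustrative stub):

import org.junit.Assume
import spock.lang.Specification

class ConditionalSpec extends Specification {

    // instance state, evaluated when the spec is instantiated
    boolean serverAvailable = checkServer()

    def "runs only when the server is up"() {
        setup:
        // throws AssumptionViolatedException if false; Spock catches it and ignores the test
        Assume.assumeTrue(serverAvailable)

        expect:
        true
    }

    private static boolean checkServer() { true }
}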
 

Geb / Selenium

Just some notes on web testing.

WebDriver comes with 4 finders https://www.w3.org/TR/webdriver1/#element-retrieval

The Find Element, Find Elements, Find Element From Element, and Find Elements From Element commands allow lookup of individual elements and collections of elements.

The parameter passed to the find method specifies the Locator strategy

Here are some examples using the WebDriver API directly:

// By class
List<WebElement> elementsList = driver.findElements(By.className("myClass"))
// By XPath (expressions are easily copyable from the Chrome developer tools)
WebElement element = driver.findElement(By.xpath("//*[@class='myClass']"))
// Find Element From Element: scope the lookup to a previously found element
WebElement cell = element.findElement(By.tagName("td"))
// By CSS (using Geb's waitFor here)
waitFor(10) { driver.findElement(By.cssSelector("table.table")).displayed }



Note: with cssSelector you can use :nth-child to sub-select a specific element. (This is similar to the :eq operator in jQuery, except that :eq is 0-based while :nth-child is 1-based.) Note also that the index might not be obvious: I was trying to select what appeared to be the third table, but the copy-selector option in Chrome showed it was (apparently) the 7th child.
e.g.
$("body > table:nth-child(3)")
in jQuery the rough equivalent is $("body > table:eq(2)") // note the zero-based index

For more information on Geb, go to The Book of Geb. It has some nice Groovy syntax to remove some of the boilerplate logic, as the sketch below shows.
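
For instance, a small sketch of Geb's Browser.drive DSL (the URL and selectors are illustrative):

import geb.Browser

Browser.drive {
    go "http://myapp/login"                  // navigate
    $("form.login").username = "user"        // sets the input named "username"
    $("form.login").find("button").click()   // jQuery-like content lookup
}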

Testing in Chrome

You can test your selectors in Chrome
  • Press F12 to open up Chrome DevTools.
  • Switch to Console panel.
  • Type in XPath like $x(".//*[@id='id']") to evaluate and validate.
  • Type in CSS selectors like $$("#id") to evaluate and validate.
  • Check results returned from console execution.

Page Object Pattern

As well as writing tests directly, you can abstract pages away with the Page Object pattern.


import geb.Page

class WwwSchoolsAjaxExamplePage extends Page {

  private static final DEFAULT_DIV_CONTENT = 'Let AJAX change this text'

  // relative to the baseUrl configured in GebConfig
  static url = "ajax/ajax_example.asp"

  // the "at checker", verified when navigating to the page
  static at = {
      title == "AJAX Example"
  }

  // content DSL: named shortcuts for page elements
  static content = {
      theButton { $(".example button") }
      theResultDiv { $("#myDiv") }
  }

  def makeRequest() {
      theButton.click()
      waitFor { theResultDiv.text() != DEFAULT_DIV_CONTENT }
  }

}
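
A sketch of driving that page from a Geb/Spock spec (assumes baseUrl is set in GebConfig so the relative url resolves):

import geb.spock.GebSpec

class AjaxExampleSpec extends GebSpec {

    def "the AJAX call replaces the div text"() {
        when:
        to WwwSchoolsAjaxExamplePage   // navigates to url and runs the at checker

        and:
        makeRequest()

        then:
        theResultDiv.text() != 'Let AJAX change this text'
    }
}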

Monday, October 14, 2019

Global gradle settings

If you have a local repo for all dependencies you can include this in a global init.gradle file so that all builds pick it up (akin to settings.xml in Maven).

init.gradle is stored in your Gradle user home, $GRADLE_USER_HOME (e.g. $HOME/.gradle)

e.g. here is an example

allprojects {
    repositories {
        mavenLocal()
        maven { url "http://myartifactory/repo1-cache" }
        maven { url "http://myartifactory/libs-release" }
        maven { url "http://myartifactory/libs-snapshot" }
    }
    buildscript {
        repositories {
            mavenLocal()
            maven { url "http://myartifactory/repo1-cache" }
            maven { url "http://myartifactory/libs-release" }
            maven { url "http://myartifactory/libs-snapshot" }
        }
    }
}

If you need this updated in IntelliJ, then you should run gradle wrapper and update gradle/wrapper/gradle-wrapper.properties, fixing the distributionUrl value.
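
The line to change would look something like this (the mirror URL and version are illustrative):

distributionUrl=http\://myartifactory/gradle-dist/gradle-5.6.2-bin.zip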

Thursday, August 01, 2019

Zip file handling in Groovy / Java

This is just a simple program to correlate the dates and times of files in a zip file. We used it to verify the delivery times of externally sourced data. It has some extra stuff in there since we only wanted a subset of the files.

import java.text.DateFormat
import java.text.SimpleDateFormat
import java.util.regex.Matcher
import java.util.regex.Pattern
import java.util.zip.ZipEntry
import java.util.zip.ZipFile


class FileEntry{
   String filename;
   String dir;
   Date date;
   String day;
   String time;
   DateFormat dayDf = new SimpleDateFormat("yyyy-MM-dd");
   DateFormat timeDf = new SimpleDateFormat("HH:mm:ss");
   
   FileEntry(String filename, Date d){
       this(filename, null, d);
   }
   FileEntry(String filename, String dir, Date d){
       this.filename = filename;
       this.dir = dir;
       this.date = d;
       day = dayDf.format(d);
       time = timeDf.format(d);
   }
   
   public String toString(){
       return filename+","+day+","+time+"\n";
   }
}


String zipFileName = "c:/temp/dropdirsAug2018.zip"
if(args.length>0)
    zipFileName=args[0]
ZipFile inputZip
try{
    inputZip = new ZipFile(zipFileName)
}catch(Exception e){
    println "Cannot find expected file $zipFileName"
    return
}

/*
 * Table listing files to check 
 * inName is the path and name of input files (including asterisk wildcard)
 * outName is the output name. 
 */
def zipFilesToMove = [
        [inName:"/path/*filename*.csv", outName:"<#>.filename.csv"],
        //[inName:"/path/otherfile.xml", outName:"otherfile.xml", outDir:"/notUsedHere"]

]


def list = doUnzip(inputZip, zipFilesToMove)

//Clean up old folders
//println list
writeToFile(list)
println "Done"

void writeToFile(List fileEntries){
    println "Writing to file"
    File f = new File("output.csv");
    f.text = "";
    int x=0;
    for(FileEntry line: fileEntries){
        f << line.toString();
        println line
        x++;
    }
    println "Written $x lines"
}

// Find the specified files in the zip and collect a FileEntry for each match
public List doUnzip(ZipFile inputZip, List zipFilesToCheck){
    List ret = new ArrayList();


    //Loop through the big zip to find the files we want.
    // For each match we record its name and timestamp
    Enumeration e = inputZip.entries()
    while(e.hasMoreElements()) {
        ZipEntry zipEntry = e.nextElement()
        if (zipEntry.isDirectory()) {
            continue;
        }
        FileEntry fileEntry = checkIfValidZipEntry(zipEntry, zipFilesToCheck)
        if(fileEntry)
            ret << fileEntry;
    }
    return ret
}

public FileEntry checkIfValidZipEntry(ZipEntry zipEntry, List zipFilesToCheck){
    String zipEntryName = zipEntry.getName();
    long timestamp = zipEntry.getTime()
    for(Map zipFileToCheck: zipFilesToCheck){
        String regex = getRegex(zipFileToCheck.inName)
        // Note: String.matches() checks that the whole string matches, not a substring,
        // so use Matcher.find() instead
        Pattern p = Pattern.compile(regex);
        Matcher m = p.matcher(zipEntryName);
        boolean match = m.find()
        if(match){
            Date d = new Date();
            d.setTime(timestamp)
            return new FileEntry(zipFileToCheck.inName, d);
        }
    }
    //println "No match for $zipEntryName.. Ignoring"
    return null
}

String getRegex(String filename){
    // turn the wildcard pattern into a regex: "*" -> ".*", then escape "/" as "\/"
    return filename.replaceAll("\\*", ".*").replaceAll("\\/","\\\\/")
}

Monday, June 24, 2019

SaltStack

Some notes

Get formulas from https://github.com/saltstack-formulas/ e.g. for HAProxy
 https://github.com/saltstack-formulas/haproxy-formula

The steps below are condensed (and include more explicit commands than on this page: https://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html)

To download it with curl:
curl -LOk https://github.com/saltstack-formulas/haproxy-formula/archive/master.zip

Unzip it, then add it to the Salt master file_roots (sudo vi /etc/salt/master):
file_roots:
  base:
    - /srv/salt
    - /srv/formulas/haproxy-formula
Restart Salt Master
sudo pkill salt-master 
sudo salt-master -d 

Run a state, e.g. for haproxy:
sudo salt '*' state.apply haproxy.install

Salt States

Backup folder
backup_folder:
  file.copy:
    - name: {{ folder_name }}.bak.{{ None|strftime("%Y-%m-%d_%H_%M") }}
    - source: {{ folder_name }}
    - user: {{ user }}
    - group: {{ group }}


Set a variable to the latest filename:
{%- set fileName = salt['file.find']('/var/publish/',type='f', name='PackageToPublish-1.*.tar.gz')  | last -%}

Include another state
Say we have a service stop state in a file called service/stop.sls:
stop_service:
  service.dead:
    - name: myservice

If we are in the same folder and want to include it, we include it using its filename (with any directories in front).
However, in the require step we just reference the ID name (you can also use state identifiers like service: or pkg:), as in the state below.

include:
  - service.stop

upgrade_archive_unpacked:
  archive.extracted:
    - name: {{ pillar['root_dir'] }}/{{ pillar['service']['upgrade'] }}
    - source: {{ pillar['service']['source'] }}
    - source_hash: {{ pillar['service']['source_hash'] }}
    - user: {{ pillar['user'] }}
    - group: {{ pillar['group'] }}
    - overwrite: True
    - enforce_ownership_on: {{ pillar['root_dir'] }}
    - enforce_toplevel: False
    - options: "--strip-components=1"
    - require:
      - stop_service


Rollback to backup
{%- set rollbackFolderTuple = salt['file.find']('/pathToSearch/', type='d', name='dirname.bak.*', print='mtime,name')| sort  | last -%}
{%- set rollbackFolder = rollbackFolderTuple[1] %}
rollback_folder:
  file.rename:
    - name: {{ folder_name }}
    - source: {{ rollbackFolder }}

Tuesday, April 09, 2019

Postgres

Cheat sheets

https://gist.github.com/Kartones/dd3ff5ec5ea238d4c546

https://gist.github.com/apolloclark/ea5466d5929e63043dcf

Number of active connections by DB and IP
select count(*),datname, client_addr from pg_stat_activity group by datname, client_addr;

Note: you can also use ps to show the number of active postgres processes
 ps -ef | grep -i postgres

To just show all connections to a particular DB
select substring(query,0,90),state,query_start,pid from pg_stat_activity where datname='DBNAME' order by query_start;

Locked queries
Can use something like this to show locks

SELECT blocked_locks.pid     AS blocked_pid,
         blocking_locks.pid     AS blocking_pid,
         blocking_activity.state AS blocking_state,
         blocking_activity.query_start AS blocking_query_start,
         substring(blocked_activity.query,0,60)    AS blocked_statement,
         substring(blocking_activity.query,0,60)   AS current_statement_in_blocking_process,
         blocked_activity.datname AS db
    FROM  pg_catalog.pg_locks         blocked_locks
     JOIN pg_catalog.pg_stat_activity blocked_activity  ON blocked_activity.pid = blocked_locks.pid
     JOIN pg_catalog.pg_locks         blocking_locks
         ON blocking_locks.locktype = blocked_locks.locktype
         AND blocking_locks.DATABASE IS NOT DISTINCT FROM blocked_locks.DATABASE
         AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
         AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
         AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
         AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
         AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
         AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
         AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
         AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
         AND blocking_locks.pid != blocked_locks.pid
     JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
    WHERE NOT blocked_locks.GRANTED ORDER BY blocking_activity.query_start;

Permissions and pg_hba

Permissions are controlled by the pg_hba file

To find out where this is, run:

show hba_file;

Normally somewhere like /var/lib/pgsql/10/data/pg_hba.conf

By default you will not be able to run psql -U postgres unless you are the postgres user (in Linux); you will get "FATAL: Peer authentication failed". See https://gist.github.com/AtulKsol/4470d377b448e56468baef85af7fd614.

I have seen this set up to allow all local users access:


# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# Default is host    all             all             127.0.0.1/32            ident


Data

Postgres stores its data in files under the data directory.
The default is something like this:
/var/lib/postgresql/9.5/main
Also seen: /var/lib/pgsql/9.6/data/. The compiled-in default is /usr/local/pgsql/data.
Run this to find out the actual location:

SHOW data_directory;

New Setup

After installing

e.g. for postgres 10.

# Init DB (using default data folder)
sudo service postgresql-10 initdb
sudo service postgresql-10 start
exit
sudo su - postgres
cp /var/lib/pgsql/10/data/pg_hba.conf /var/lib/pgsql/10/data/pg_hba.conf.orig
# See below for allowing local users to login as postgres user
vi /var/lib/pgsql/10/data/pg_hba.conf
sudo service postgresql-10 reload
exit
sudo su -
psql -U postgres
    > CREATE ROLE <username> NOSUPERUSER CREATEDB CREATEROLE INHERIT LOGIN;
    > ALTER USER <username> WITH PASSWORD '<password>';
    > \q

Backup / Restore

SQL Dump

The idea behind this dump method is to generate a file with SQL commands that, when fed back to the server, will recreate the database in the same state as it was at the time of the dump. PostgreSQL provides the utility program pg_dump for this purpose. The basic usage of this command is:

pg_dump dbname > dumpfile

You can be more specific, e.g. -n to back up an individual schema, -t for an individual table, or -Fc for the custom archive format:
pg_dump -Fc %DATABASE% -f %DUMP_FILE_PATH%
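
For example (the schema and table names are illustrative):

pg_dump -n myschema %DATABASE% > schema_dump.sql
pg_dump -t mytable %DATABASE% > table_dump.sql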

As you can see, pg_dump writes its result to standard output by default. While the plain command above creates a text file of SQL statements, pg_dump can create files in other formats that allow for parallelism and more fine-grained control of object restoration.

Restore

Non-text file dumps are restored using the pg_restore utility. Text-file dumps can be restored with psql:
psql dbname < dumpfile

where dumpfile is the file output by the pg_dump command. The database dbname will not be created by this command, so you must create it yourself from template0 before executing psql (e.g., with createdb -T template0 dbname)
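
For a custom-format (-Fc) dump, the equivalent restore is roughly (the database name is illustrative):

createdb -T template0 dbname
pg_restore -d dbname dumpfile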

By default, the psql script will continue to execute after an SQL error is encountered. You might wish to run psql with the ON_ERROR_STOP variable set to alter that behavior and have psql exit with an exit status of 3 if an SQL error occurs:
psql --set ON_ERROR_STOP=on dbname < dumpfile
Either way, you will only have a partially restored database. 
pg_dump dumps only a single database at a time, and it does not dump information about roles or tablespaces (because those are cluster-wide rather than per-database). To support convenient dumping of the entire contents of a database cluster, the pg_dumpall program is provided. pg_dumpall backs up each database in a given cluster, and also preserves cluster-wide data such as role and tablespace definitions. The basic usage of this command is:
pg_dumpall > dumpfile
The resulting dump can be restored with psql:
psql -f dumpfile postgres

Barman

Barman is the Postgres Backup and Recovery Manager. See http://docs.pgbarman.org/release/2.12/
It will backup the databases configured on a DB server.

Options: streaming (preferred) vs rsync.
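
Typical commands once a server is configured (the server name pg-server is illustrative):

barman check pg-server                        # verify connectivity and configuration
barman backup pg-server                       # take a base backup
barman list-backup pg-server                  # list available backups
barman recover pg-server latest /recover/dir  # restore the latest backup to a directory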

This is a simple script we used to keep a certain number of folders (backups). Note it will only delete one folder (the oldest) at a time, so if you have way more folders, you may need to delete them manually first.

#!/bin/bash

dir="<barmanDir>/Local/base/"
min_dirs=3   # if there are more dirs than this we will delete the oldest

if [[ $(find "$dir" -mindepth 1 -maxdepth 1 -type d | wc -l) -ge $min_dirs ]]; then
    # pick the oldest entry (by modification time) under $dir, excluding $dir itself
    IFS= read -r -d $'\0' line < <(find "$dir" -mindepth 1 -maxdepth 1 -printf '%T@ %p\0' 2>/dev/null | sort -z -n)
    file="${line#* }"
    ls -lLd "$file"
    rm -rf "$file"
fi

Starting / stopping

service postgresql-9.6 initdb
chkconfig postgresql-9.6 on
service postgresql-9.6 start