Wednesday, June 10, 2015

maven-scm-plugin pom.xml sample for svn checkout/checkin - and why you get the error "svn: [dir] is not a working copy"

Below is a snippet of my pom.xml that performs the following 3 steps when invoked with "mvn generate-sources" -
1. maven-scm-plugin does an svn checkout into the target/checkout directory
2. exec-maven-plugin uses curl to download a jar file into the target/checkout directory
3. maven-scm-plugin does an svn add & checkin from the target/checkout directory
 ------------------------------------------------------
  <scm>
    <connection>scm:svn:http://your_svn_url</connection>
    <developerConnection>scm:svn:http://your_svn_url</developerConnection>
    <url>http://your_svn_url</url>
  </scm>

  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-scm-plugin</artifactId>
      <version>1.9</version>
      <configuration>
        <username>...</username>
        <password>...</password>
        <basedir>target</basedir>
        <workingDirectory>target/checkout</workingDirectory>
      </configuration>
      <executions>
        <execution>
          <id>perform-checkout</id>
          <phase>initialize</phase>
          <goals>
            <goal>checkout</goal>
          </goals>
          <configuration>
            <workingDirectory>target/checkout</workingDirectory>
            <checkoutDirectory>target/checkout</checkoutDirectory>
          </configuration>
        </execution>
        <execution>
          <id>perform-checkin</id>
          <phase>generate-sources</phase>
          <goals>
            <goal>add</goal>
            <goal>checkin</goal>
          </goals>
          <configuration>
            <workingDirectory>target/checkout</workingDirectory>
            <basedir>target</basedir>
            <includes>*</includes>
            <message>test</message>
          </configuration>
        </execution>
      </executions>
    </plugin>

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <version>1.2.1</version>
      <executions>
        <execution>
          <id>id1</id>
          <phase>initialize</phase>
          <goals>
            <goal>exec</goal>
          </goals>
          <configuration>
            <executable>curl</executable>
            <workingDirectory>target/checkout</workingDirectory>
            <arguments>
              <argument>-O</argument>
              <argument>https://repo.abc.com/abc.jar</argument>
            </arguments>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
--------------------------------------------------
A few things to note here -
1. As you can see, the "initialize" phase executes scm:checkout and exec:exec, then the "generate-sources" phase executes scm:add and scm:checkin. You can of course change the phases to whatever fits your need, but keep in mind that the phases determine the order in which your plugin executions run (and within the same phase, executions run in the order the plugins are declared in the pom, at least in maven 3.0.3+).

2. You should always do an svn checkout first, so that the directory you plan to check in from contains an svn working copy (in this case, your svn working directory is [your_current_dir]/target/checkout). Otherwise, scm:checkin fails with the error "svn: [your_current_dir/target/checkout] is not a working copy". You could of course check out the svn working directory manually first and use maven only for the checkin, but with the executions above you can do both the checkout and the checkin with maven (the manual equivalent is sketched after these notes).

3. You should also check that the .svn directory inside your working directory is properly populated. One thing that can leave .svn empty is setting the "includes" configuration to the wrong value for scm:checkout - for example, if you have "includes=*" in your checkout execution configuration, your ".svn" directory will be empty and your subsequent scm:checkin will fail.
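
To make it concrete, what the three steps above boil down to, done by hand, is roughly the following (the url and jar name are the placeholders from the pom) -

    svn checkout http://your_svn_url target/checkout
    cd target/checkout
    curl -O https://repo.abc.com/abc.jar
    svn add abc.jar
    svn commit -m "test"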


Monday, March 23, 2015

How to access a siteminder SSO protected url using a java client

I needed a piece of code that can access a siteminder SSO protected url/resource (something like http://xyz.abc.net/userList). And I've heard quite a few times from other people that they wanted the same thing, so I decided to write a piece of java code. Here are the basic steps to do it -

1. create a cookie store
2. create a httpclient with the cookie store
 CloseableHttpClient httpclient = HttpClients.custom()
                    .setDefaultCookieStore(cookieStore)
                    .build();

3. send a http post to the siteminder protected url
    HttpPost httpPost = new HttpPost("http://xyz.abc.net/userList");
    HttpResponse response = httpclient.execute(httpPost);

4. check that the http response code is 302 (siteminder redirects the unauthenticated request to its login url)

5. grab the "Location" (response.getFirstHeader("Location").getValue())
    String location = response.getFirstHeader("Location").getValue();

6. create another httpPost using the Location url you get from step 5

7. set two form fields "USER" and "PASSWORD"
            HttpUriRequest httpPost2 = RequestBuilder
                    .post()
                    .setUri(new URI(location))
                    .addParameter("USER", uid)
                    .addParameter("PASSWORD", pwd).build();

8. response = httpclient.execute(httpPost2)

9. Check the cookie store - there should be an SMSESSION cookie now
   cookies = cookieStore.getCookies();
   for (int i = 0; i < cookies.size(); i++) {
       System.out.println("- " + cookies.get(i).toString());
   }

10. now create a httpget with the siteminder protected url and send it with the same httpclient - and aha, this time I could see the proper content in the response and http code 200. A complete sketch tying all the steps together is below.
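
Here's a minimal end-to-end sketch of the above, assuming HttpClient 4.3+. The url and the "USER"/"PASSWORD" form field names are just the ones from my setup - adjust them to whatever your siteminder login form expects -

import java.net.URI;
import org.apache.http.HttpResponse;
import org.apache.http.client.CookieStore;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.methods.HttpUriRequest;
import org.apache.http.client.methods.RequestBuilder;
import org.apache.http.cookie.Cookie;
import org.apache.http.impl.client.BasicCookieStore;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class SiteminderSsoClient {
    public static void main(String[] args) throws Exception {
        String protectedUrl = "http://xyz.abc.net/userList";   // the SSO protected resource
        String uid = args[0], pwd = args[1];

        // steps 1 & 2: cookie store + httpclient
        CookieStore cookieStore = new BasicCookieStore();
        CloseableHttpClient httpclient = HttpClients.custom()
                .setDefaultCookieStore(cookieStore)
                .build();

        // steps 3 & 4: POST to the protected url; siteminder answers with a 302
        HttpResponse response = httpclient.execute(new HttpPost(protectedUrl));
        if (response.getStatusLine().getStatusCode() != 302) {
            throw new IllegalStateException("expected 302, got " + response.getStatusLine());
        }

        // step 5: the Location header points at the siteminder login url
        String location = response.getFirstHeader("Location").getValue();
        EntityUtils.consume(response.getEntity());

        // steps 6, 7 & 8: POST the credentials to the login url
        HttpUriRequest httpPost2 = RequestBuilder.post()
                .setUri(new URI(location))
                .addParameter("USER", uid)
                .addParameter("PASSWORD", pwd)
                .build();
        response = httpclient.execute(httpPost2);
        EntityUtils.consume(response.getEntity());

        // step 9: the cookie store should now contain an SMSESSION cookie
        for (Cookie cookie : cookieStore.getCookies()) {
            System.out.println("- " + cookie);
        }

        // step 10: GET the protected url again - expect 200 and the real content
        response = httpclient.execute(new HttpGet(protectedUrl));
        System.out.println(response.getStatusLine());
        System.out.println(EntityUtils.toString(response.getEntity()));

        httpclient.close();
    }
}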

That's it!  Hope this helps.

Wednesday, February 25, 2015

Rundeck "authentication failure...Make sure your resource definitions and credentials are up to date." issue

We have Rundeck connecting to many different application servers to help application teams do their deployments. Two teams reported the authentication failure below, one for a linux node and the other for a windows node.

Error:
Authentication failure connecting to node: "<node>". Make sure your resource definitions and credentials are up to date.

For the linux node, it's because the user was using passwordless ssh to connect to their linux box, but the public key wasn't set up properly - the connection didn't even work from the Rundeck server to their box. The problem was resolved after having the user change the permissions on their public key file (see the typical settings below).
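
For the record, sshd is picky about permissions on the remote side. A typical safe setup looks like this (a generic example, not necessarily the exact change this team made) -

    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/authorized_keys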

For the windows node, it's because the user was using a "script" (inline script) job step instead of the simple "command" job step. The "script" job step appears to be unix-only, and because the user was using it, netstat on the user's windows box showed Rundeck trying to connect to port 22 (the default ssh port) on their windows box. Replacing the inline script with a simple command step fixed the issue.

Monday, November 10, 2014

grails.gsp.enable.reload vs disable.auto.recompile in grails 1.3.7

Needed to make some GUI changes to a project that was developed on grails 1.3.7.

In grails 2.*, if you edit a gsp file, it'll be automatically recompiled in a few seconds, and if you refresh, you'll see the changes in your GUI. But for some reason, this didn't work in my application, which is based on 1.3.7.

After some investigation, I found that I just needed to add the system property "-Dgrails.gsp.enable.reload=true" to grails run-app, then update the gsps and refresh the web page. Unlike grails 2.*, there's nothing in the grails run-app log that indicates the changes to the gsp files have been picked up, and this got me stuck for a while because I was expecting to see something in the logs (maybe the missing log output was due to my log4j settings).
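
For example, this is how I passed the flag (adjust if you launch grails differently) -

    grails -Dgrails.gsp.enable.reload=true run-app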

The "-Ddisable.auto.recompile" property is to control the auto recompile of the java and groovy classes. And it's accomplished in grails via the following -

        ant.groovyc(destdir:classesDirPath,
                    classpathref:classpathId,
                    encoding:"UTF-8",
                    verbose: grailsSettings.verboseCompile,
                    listfiles: grailsSettings.verboseCompile,
                    excludes: '**/package-info.java') {
            src(path:"${grailsSettings.sourceDir}/groovy")
            src(path:"${basedir}/grails-app/domain")
            src(path:"${basedir}/grails-app/utils")
            src(path:"${grailsSettings.sourceDir}/java")
            javac(classpathref:classpathId, debug:"yes")
        }


Tuesday, October 28, 2014

winrm quickconfig -2144108269 0x80338113 error

Got dragged by a user into an online chat today about a winrm issue. I couldn't find a solution online after rounds of "googling", but finally figured out the problem. Posting it below - hope it helps if you're in the same situation.

Symptom:
The users have two winrm server hosts and are not able to run "winrm quickconfig" on either. One of the hosts is new, so they had never run winrm there before. On the other host, the users were able to run "winrm quickconfig" a while ago and had set up the winrm http & https listeners there. Now, although they can still use "winrm e winrm/config/listener -r:<winrm_server_host>" to see the winrm listener settings from a client host, on the winrm server host itself they see the following error whenever they run "winrm quickconfig" -
============================
C:\Users>winrm quickconfig
WinRM already is set up to receive requests on this machine.
WSManFault
            Message = The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol.

Error number:  -2144108269 0x80338113
==============================

Investigation:
Checked "winrm get winrm/config -r:<winrm_server_host>" from a remote machine. And all looked fine.
Checked "winrm e winrm/config/listener -r:<winrm_server_host>" from a remote machine. All looked fine too, except that besides "127.0.0.1" and "::1", there were two other ips for the host (which usually is only one ip). Turned out that this was not an issue, because the host had 2 nic cards.
But the two ip addresses prompted me to ask the user if there's anything unusual or if anything had been changed in the hosts file. And user realized that an entry for "localhost" had been recently  added to the "etc\hosts" file, which had changed the ip for "localhost" from default 127.0.0.1 to the host's ipv4 address.

Resolution:
Had the user comment out the localhost "redirect" line in the etc\hosts file, and the user was then able to run "winrm quickconfig" on the winrm server host without error.
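
For illustration, the offending hosts file entry looked something like this (the ip here is made up) -

    # etc\hosts
    10.20.30.40    localhost    # redirects localhost away from the default 127.0.0.1

Commenting that line out restores the default 127.0.0.1 resolution that winrm expects.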




Monday, October 27, 2014

rundeck calling xebialabs overthere for winrm connection

Got a meeting "invitation" from a fearful client regarding how to set up a node in rundeck that uses winrm.
To survive the meeting, I buried my head in the rundeck and xebialabs overthere code for the whole afternoon :( and here's what I figured out -

1. rundeck has a winrm plugin (the OTWinRMNodeExecutor class). The plugin looks for the domain mapping in the $RUNDECK_BASE/krb/domain.properties file. This file is a simple name/value mapping, with the name being the shortened domain name (such as "ABC") and the value being the fully qualified domain name (such as "ABC.MYCOMPANYNAME.COM").

The winrm plugin also looks for the $RUNDECK_BASE/krb/realm_kdc.properties file. This file is again a name/value mapping, with the name being the realm name (usually the upper case domain name) and the value being the kdc host. Samples of both files are below.
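
As a sample, the two files might look like this (the kdc host name here is made up) -

    # $RUNDECK_BASE/krb/domain.properties
    ABC=ABC.MYCOMPANYNAME.COM

    # $RUNDECK_BASE/krb/realm_kdc.properties
    ABC.MYCOMPANYNAME.COM=kdc01.mycompanyname.com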

2. Based on the domain name of a node, the plugin would set the following 2 system properties -
java.security.krb5.realm - the realm name( the all upper case host domain name)
java.security.krb5.kdc  - the kdc host name (the kdc defined in the realm_kdc.properties file for the particular domain)
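
In plain java terms, the effect of step 2 is just the following (using the sample values from the properties files above) -

    System.setProperty("java.security.krb5.realm", "ABC.MYCOMPANYNAME.COM");
    System.setProperty("java.security.krb5.kdc", "kdc01.mycompanyname.com");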

3. the plugin would then invoke the xebialabs overthere CifsConnectionBuilder to make the winrm connection, passing in a whole bunch of connection options.

4. CifsConnectionBuilder creates a CifsWinRmConnection, which then creates an ApacheHttpComponentsHttpClientHttpConnector and passes in the username & password from the plugin options. ApacheHttpComponentsHttpClientHttpConnector, as the name indicates, is the center of the implementation -
- it checks if the username is of a format "username@domainName". If so, kerberos authentication is considered enabled.
- it creates a httpclient, and registers the KERBEROS and SPNEGO authentication schemes
- it sets the credentials with "httpclient.getCredentialsProvider().setCredentials(...)" for the KERBEROS and SPNEGO schemes
- and of course, it sends SOAP requests and receives responses

5. Both of these documents are great reads, although I'm not sure how up-to-date the 2nd link is -
- http://docs.oracle.com/javase/7/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html#SetProps
- http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html

Hopefully now I can answer a question or two to survive till the next meeting :(

Thursday, October 16, 2014

keon routes for passwordless ssh

Being in a very small team means I not only have to do development, but also have to handle L3 support issues. This client had 2 issues -

1. why, in the DEV environment, passwordless ssh only works for some hosts but not others
2. why the passwordless ssh from our UAT server to their UAT server doesn't work for their functional id


After spending 40 minutes on the conference call and in the chat, staring at their tiny shared screen with super tiny fonts, I finally noticed a typo in their public key. An ssh public key should start with "ssh-rsa"; somehow, on the hosts where passwordless ssh failed, their public key started with "sh-rsa" (a valid entry is shown below for comparison). They apparently cut off the first "s" when they copied/pasted it. A typo cost 4 people 40 minutes each? And it took 3 days for the ticket they opened to reach me, and one of them is a VP. I wonder how much that cost the firm.
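
For the record, a valid public key entry should look something like this (key material truncated) -

    ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...snip...== user@host

If the leading "ssh-rsa" is mangled (e.g. "sh-rsa"), sshd simply skips the line and you fall back to password authentication.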

Issue #2 was a fairer question, because they didn't know that the keon access routes for their functional id needed to be set up from our host group to their host group first. Showed them how to use the keon website to check the host groups and access routes, and they were happy to set up the keon routes themselves.