.. code:: bash

   #!/bin/bash

   JENKINS_FOLDER="/shared/jenkins"
   DEST_FOLDER="/jenkins-backup"

   if [ ! -d "${DEST_FOLDER}" ]; then
       mkdir -p "${DEST_FOLDER}"
   fi
   if [ ! -d "${DEST_FOLDER}/jobs" ]; then
       mkdir -p "${DEST_FOLDER}/jobs"
   fi

   cd "${JENKINS_FOLDER}"
   pwd
   /usr/bin/rsync -v -a --relative --checksum ./*.xml "${DEST_FOLDER}"
   /usr/bin/rsync -v -a --relative --checksum ./plugins/ "${DEST_FOLDER}/"

   cd "${JENKINS_FOLDER}/jobs"
   pwd
   find . -maxdepth 2 -name "config.xml" -print0 | /usr/bin/rsync -v -a --relative --checksum --files-from=- --from0 ./ "${DEST_FOLDER}/jobs/"
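One way to run this on a schedule is a nightly cron entry on the Jenkins host - a sketch only, with a placeholder script path and time:

# hypothetical location of the script above; adjust the path and schedule to taste
30 2 * * * /usr/local/bin/jenkins-backup.sh >> /var/log/jenkins-backup.log 2>&1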
TL;DR
if &term =~ '256color'
    set t_ut=
endif
I am trying to configure Weblogic 10.3.5 with a Coherence cluster and two nodes. Everything is running on my local machine. I configured Coherence with WLST and am now trying to run both nodes. One node starts and the second fails with the following exception:
2012-05-10 12:57:37.681/6.175 Oracle Coherence GE 3.6.0.4
I configured the servers to run on the same port and enabled Unicast Port Auto Adjust, thinking that Coherence would be smart enough to figure out that the port is in use.
But it is not that smart!
But Oracle Metalink has more information about it:
Cause
Since Patch 2, Coherence 3.6 improves its network throughput by using two open UDP ports for each member. For the second port, Coherence picks the Unicast listener port + 1. This was implemented as part of COH-3722.
Note that Coherence 3.6.1 and onwards will not have the NullPointerException but a more meaningful error message.
Solution
Separate nodes which run on the same machine by at least two ports.
For example:
When you specify localport=6000 for one node, it will also use port 6001.
The next node, specified with localport=6002, will also use 6003.
The node after that, specified with localport=6004, will also use 6005.
And so on.
I reconfigured Unicast ports accordingly and the problem is solved!
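In my case the ports are set through the WLST/WebLogic configuration, but for illustration, the same separation for standalone nodes started from the command line would look roughly like this (a sketch, assuming the standard tangosol.coherence.localport system property of Coherence 3.x):

# node 1 uses ports 6000 and 6001
java -Dtangosol.coherence.localport=6000 -cp coherence.jar com.tangosol.net.DefaultCacheServer
# node 2 uses ports 6002 and 6003
java -Dtangosol.coherence.localport=6002 -cp coherence.jar com.tangosol.net.DefaultCacheServer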
I got a complaint from one of the developers: the JMS resources defined in his local instance of Weblogic server are defined correctly, but he can't use them from the application he is developing – the names are not present in the JNDI tree. The problem is that in our environment the same scripts are used to create all Weblogic instances, from local development through to production.
A quick look through the Weblogic Console JNDI browser confirmed that the JMS resources are not in the tree. I looked for exceptions or errors in the log files – everything looks ok, the logs are clean. Then I tried to change one of the parameters of the JMS resources, and Weblogic complained with a message saying that the JMS resource is not yet started. Yet in the console it looks like it started ok.
Then I took a look at the Git log of recent changes for this type of Weblogic domain, and the only change was the introduction of JDBCStores for the JMS resources. But where are the local development machines pointed to? Is it the local Oracle XE database? No, they are pointed to the shared database used by another environment. This is clearly wrong! I don't think the JDBCStore implementation is smart enough to recognise that connections are coming from different domains. So I removed the JDBCStore usage from the development domain and restarted it – voila, the JMS resources are now in the JNDI tree!
So Weblogic was misconfigured, but it silently ignored the fact that it could not recognise the data in the JDBCStore, and the only symptom was the missing entries in the JNDI tree. How weird!
Sometimes I don't want to update everything on my Ubuntu machine, but I do want to keep it up to date with all security holes patched. At the same time I don't like updates happening automatically, as the Ubuntu documentation suggests.
This is the way to apply only security fixes:
1. Copy the update sources list to a new file:
sudo cp /etc/apt/sources.list /etc/apt/security.sources.list
2. Comment out everything in the new file, leaving only the security repositories (see the sed sketch after these steps).
3. Use the following command to apply updates using the new file:
sudo apt-get upgrade -o Dir::Etc::SourceList=/etc/apt/security.sources.list
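Step 2 can be done by hand in an editor, or with a one-liner along these lines (a sketch, assuming the security entries contain "-security" in their suite names, as they do on stock Ubuntu):

# comment out every deb/deb-src line that is not a "-security" repository
sudo sed -i '/-security/! s/^deb/#deb/' /etc/apt/security.sources.list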
I created an alias in my .bash_aliases for it:
alias updatesecurity='sudo apt-get upgrade -o Dir::Etc::SourceList=/etc/apt/security.sources.list'
Now I simply type updatesecurity to stay up to date.
I found this solution via ServerFault.com
Install Sun Java:
sudo vi /etc/apt/sources.list
and add the partner repository line:
deb http://archive.canonical.com/ubuntu maverick partner
sudo apt-get update
sudo apt-get install sun-java6-jdk
Install Google Chrome unstable:
Add this to /etc/apt/sources.list:
deb http://dl.google.com/linux/deb/ stable non-free main
sudo wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub > google.pub
sudo apt-key add google.pub
sudo apt-get update
sudo apt-get install google-chrome-unstable
I decided to save the Hudson configs to a Subversion repository, just in case something goes wrong with the server later and I need to restore all the projects. A bit of googling first: there are plugins to do a backup of workspaces and configs. But why would I need them? I am on Solaris, so I can create the same thing with my own hands. Here are my requirements:
Simples! So I created a custom build job within Hudson, pointed it to the Subversion location where I will store the backups and the backup shell script, which gathers all the Hudson configs and deals with Subversion. I told Hudson that I don't want to create a subfolder for the project within its workspace. Then I set it to execute the script this way: ${WORKSPACE}/hudsonbackup.sh
Here is the script itself:
#!/bin/bash
SRC_FOLDER=${WORKSPACE}
SVN_PARAMS="--config-dir /.subversion/ --non-interactive --trust-server-cert --username f_tc_ci1 --password *"
SVN_COMMAND="/opt/csw/bin/svn"
JOBS=$(ls -al /.hudson/jobs/ | grep -v job1 | grep -v job2 | awk 'NR>3 {print $9}')
CURR_FOLDER=$(pwd)
echo "Copying Hudson configs"
cp /.hudson/*.xml ${SRC_FOLDER}
echo "Done."
cd ${SRC_FOLDER}
echo "Processing Hudson jobs"
for JOB in ${JOBS}
do
echo "Processing ${JOB}"
if [ ! -e ${JOB} ]
then
echo "New job. Creating folder and adding it to SVN"
mkdir ${JOB}
${SVN_COMMAND} add ${SVN_PARAMS} ${JOB}
SVN_COMMENT="${SVN_COMMENT} Added ${JOB}"
fi
echo "Saving ${JOB}/config.xml" if [ -e ${JOB}/.svn ] then if [ -e ${JOB}/config.xml ] then # config.xml already exist - don't need to add it to subversion cp /.hudson/jobs/${JOB}/config.xml ${JOB}/ else echo "New job - new config.xml" cp /.hudson/jobs/${JOB}/config.xml ${JOB}/ ${SVN_COMMAND} add ${SVN_PARAMS} ${JOB}/config.xml fi else echo "Problem creating and adding folder to Subversion" exit 1 fi done
echo "Job processing is done. Committing..."
${SVN_COMMAND} ci ${SVN_PARAMS} -m "Config saved by Hudson. ${SVN_COMMENT}"
echo "All done."
Note 1: My Subversion is on an HTTPS server with a self-signed certificate, so I have to tell the Subversion client to trust it explicitly - that is the --trust-server-cert flag in SVN_PARAMS. In SVN_COMMAND I construct the full path to the Subversion client, because Hudson executes shell scripts with a quite limited PATH.
Note 2: I need to ignore some jobs, so I filter them out in the JOBS line. The same line also has the Hudson location ("/.hudson") hardcoded - it needs to be changed if your Hudson lives somewhere else.
After all that I set this job to execute daily. It works!
find . -name "*.ext" -exec grep -i -H -n "texttofind" {} ;
I am installing Postfix on my unmanaged VPS hosting machine and I am doing it for the first time ever. I followed the installation manual on the Ubuntu site and everything looks all right except one tiny thing: I can't send email using Telnet. Here is the response:
421 4.3.0 collect: Cannot write ./dfnBA9gfVO030273 (bfcommit, uid=0, gid=111): No such file or directory
I googled for an answer, but nothing useful came up, so I tried to debug the problem myself. Looking into the mail logs (/var/log/mail.log, found by examining /etc/syslog.conf) I found the following strange message:
start postfix/master[15758]: fatal: bind 0.0.0.0 port 25: Address already in use
That means only one thing: there is another process listening on this port. Hmm, what could it be? Sendmail, perhaps? I had uninstalled it according to the guides I was using, but it was still running.
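A quick way to confirm what is holding port 25 (a sketch; the -p flag assumes the net-tools netstat):

# show the process listening on port 25
sudo netstat -tlnp | grep ':25 '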
It’s time to stop it and start postfix!
sudo service sendmail stop
sudo service postfix restart
Problem solved!
Moved my blog to a cheaper VPS. Also now this blog is served by Lighttpd instead of Apache. Everything seems to be a bit faster!
This is the most frustrating thing I experience when I encounter a Solaris bash prompt: the Home, End and Delete keys do not work, because for some reason Solaris doesn't understand them as useful keys. But there is a way to enable those keys!
Add the following lines to your ~/.bashrc file:
# home key
bind '"\e[1~":beginning-of-line'
# del key
bind '"\e[3~":delete-char'
# end key
bind '"\e[4~":end-of-line'
# pgup key
bind '"\e[5~":history-search-forward'
# pgdn key
bind '"\e[6~":history-search-backward'
Save and source the file. Now the keys will work as they should.
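To pick up the change in the current session:

source ~/.bashrc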
I use VMware to run a bunch of different Linux VMs on my home PC, which runs Vista SP1. Today I encountered a strange thing: a Linux VM won't get an IP address from the VMware DHCP. I opened the VMware Virtual Networks management console (as Administrator, of course, or it won't let you make and save any changes) just to find out that the DHCP service is not started. I tried to start it from there, but it failed to do so.
So I went to the Vista Services console and tried to start the "VMware DHCP service" from there. Again, it failed to start with no explanation, not even a cryptic one. So I went to the Windows Event Viewer and there, under Windows Logs - System, I found a lot of messages from VMNetDHCP:
The description for Event ID 2 from source VMnetDHCP cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
Can't open C:\ProgramData\VMware\vmnetdhcp.conf: Access is denied.
/ The data is invalid
the message resource is present but the message is not found in the string/message table
Though the message says that the message is not found, it has enough information to fix the problem.
So, I took a look at the security properties of the C:\ProgramData\VMware folder and found that the SYSTEM account somehow is not in the list of permitted users. The only account permitted to access that folder was my Windows user account. So I added the SYSTEM account with full permissions to this folder, waited while Windows propagated the change to all files, and tried to start the VMware DHCP service again. It worked! Then I simply restarted networking within the Linux VM (sudo /etc/init.d/networking restart) and got back to work.
I created a sample JBoss Seam project in Eclipse and decided to generate entity code from an existing database. That worked fine, but when I tried to run it, Hibernate was not able to map the entity beans to the corresponding tables. The error message looks like this:
org.hibernate.hql.ast.QuerySyntaxException: <your table> is not mapped
I spent around three hours trying to read all the available information on it, but most of it is just useless – it looks like people don't know what's happening, so they recommend bizarre tricks. Like, for example, having an empty seam.settings file in each folder, meh.
Finally, I found the solution in Seam Jira: https://jira.jboss.org/jira/browse/JBSEAM-3821
They recommend forcing Hibernate to use Seam's EntityManager by changing two files:
1. components.xml
<persistence:entity-manager-factory name="bookingDatabase" installed="false"/>
<!-- If Seam loads the persistence unit (JBoss 4.x), the EntityManagerFactory will be resolved from #{bookingDatabase}.
On JBoss AS 5, the EntityManagerFactory is retrieved from JNDI (the binding occurs during application deployment). -->
<persistence:managed-persistence-context name="em" auto-create="true"
entity-manager-factory="#{bookingDatabase}" persistence-unit-jndi-name="java:/bookingEntityManagerFactory"/>
2. persistence.xml
<!-- Binds the EntityManagerFactory to JNDI where Seam can look it up.
This is only relevant when the container automatically loads the persistence unit, as is the case in JBoss AS 5. -->
<property name="jboss.entity.manager.factory.jndi.name" value="java:/bookingEntityManagerFactory"/>
Hopefully Google will find that page so the proper solution can be found much more easily!
After yesterday's success installing Team Concert, I went through the "Do and Learn" tutorial. The amount of new information and, especially, of new views in the IDE is overwhelming! Though the tutorial tries its best to guide you through the sample project, it will take me time to digest the information. So far I have managed to get through it, but I got stuck on building my HelloWorld application.
The problem is that the freshly created build engine, named "HelloWorldEngine", tells me that the Jazz Build Engine is not yet connected to the repository. But in another window the jbe.exe process is connected and waiting for builds.
I restarted jbe with the -verbose parameter to see what it does, but this resulted in only one additional line being displayed: "2008-10-28 12:16:42 [Jazz build engine] Not using a proxy to reach https://localhost:9443/jazz/". I would expect it to be more verbose than that. Perhaps it should tell me a bit more about which build engine id it tries to use…
And the build engine id is exactly my problem here. By default jbe is going to use the "default" build engine, not my "HelloWorldEngine". There is a parameter which is not mentioned in the tutorial, but is shown if you just run jbe.exe:
-engineId <engine id> (engine id of a build engine defined in the Jazz repository, default is "default")
Once I started jbe with the proper parameter, my build fired off.
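For the record, the fix is just the extra -engineId option on the jbe command line - a sketch, with credentials and the other usual options omitted (check the jbe usage output for the exact option names in your release):

# point jbe at the HelloWorldEngine build engine instead of "default"
jbe -repository https://localhost:9443/jazz -engineId HelloWorldEngine -verbose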
And the first one was a failure:
com.ibm.team.build.internal.engine.InvalidPropertyValueException: CRRTC3512E: The location "C:\java\jazz\buildsystem\buildengine\eclipse\fetched\ScrumProject\build.xml" referenced by property "com.ibm.team.build.ant.buildFile" does not exist.
This is because I named the Team Concert project "ScrumProject", but in the Java perspective I followed the HelloWorld cue card and named my Java project "HelloWorld". The build system was puzzled trying to find it. The Team Concert Tutorial says this:
13. In the Build file field, type the following path: fetched/projectname/build.xml, where projectname is the name of your Jazz project.
This is not exactly true: it should be the name of the Java project.
So I went and fixed the project name in the Ant tab, requested the build and, voila, it builds!
Now it's time to create a new project which will do something bigger than printing "Hello, World!"
After rave reviews from my coworkers, who had seen the IBM presentation, I decided to give it a try on my local machine. As usually happens with IBM beta software, the expectation of it working is close to zero. Let's give it a try this time - could it be better?
First, I headed to http://jazz.net/ to download the software. The server, obviously managed by the IBM marketing department, requires registration. The usual IBM ID is not supported, so I had to give IBM my email once again.
Since I only need the software for a basic trial, I decided not to download the version which supports DB2 and Oracle databases. There is no support for MySQL either, but at least they promise to include Derby in the setup package. So I indicated that I want the Windows installation and found that the download size is about 400Mb, and there is no IBM Download Manager (which I hate) to be seen.
So I downloaded and unpacked everything to my hard drive. I am on Windows, and Windows users are used to finding a setup.exe which does the installation of the software for you. There is no such thing here: only an install_express_c.html file, which describes the step-by-step installation procedure for both Windows and Linux. Why Linux, if I specifically stated that I want the Windows installation?
The first step says that I have to navigate to Tomcat's folder and then run the server.startup.bat file. There is no such file! The actual file is named startup.bat. So I executed it, a new command line window popped up, and it is full of exceptions. Something about an IbmX509 KeyManager that is nowhere to be found. Of course it can not be found, because I am not running the IBM JVM! What kind of assumption is it to expect that everybody in the world is on the IBM JVM, huh? I had to dig into Tomcat's server.xml to make it use Sun's X509 key manager (simply open tomcat/conf/server.xml, find IbmX509 and replace it with SunX509).
After a quick restart Tomcat is happy and I can open, according to the instructions, the Jazz setup screen, which is a web application running within Tomcat. The setup has two options: "Fast Path" and "Custom setup". I chose Fast Path because I am happy with the defaults, whatever they are. Clicking on "Fast Path" gets me to the "Setup User Registry" screen, which displays the error message "A problem occurred while loading User Registry settings." Hmm… Let's head back to the installation guide:
The default user name and password are case-sensitive:
- The user name is ADMIN.
- The password is ADMIN.
If you configured the LDAP directory Web container, log in with a JazzAdmin user that is defined in your LDAP directory.
What? I have to configure the LDAP directory Web container first? This is the first time it is mentioned in this "guide"! How do I do that? There is no answer.
Ok, Fast Path is broken, so let's try Custom setup. Click. "Loading Database Connection settings…" No progress here either. Nothing about such problems in the Tomcat logs, nothing in the server troubleshooting section of the installation guide.
So there is no Rational Team Concert for me for now. Would I recommend it for use on my next project?
Update: I decided to dig around the filesystem, looking for something useful like logs and other batch files. I found some logs in the Tomcat cache folders (tomcat/work/Catalina/localhost) with ClassNotFound messages. Found the jars containing the missing class and added them to the CLASSPATH. No help.
Then I found the server.startup.bat mentioned above and tried to run it. Now it complained about the SunX509 KeyManager, so I returned the IbmX509 KeyManager back. Still no progress with the Jazz Setup…
And then I decided to run repotools.bat – just in case the repository got corrupted or something. I executed it with the parameter -createTables, and it did something with something.
Tried the Jazz Setup page again and now it worked! Whoa!
Use Ant!
<delete includeemptydirs="true" >
<fileset dir="${checkout.dir}" defaultexcludes="false" >
<include name="**/.svn/" />
</fileset>
</delete>
There is an all too common situation: you have to go to a website once a week, you have to log in to it, and your browser is not asking you about remembering that password. So you have to remember the password yourself, it keeps slipping out of your head, and you have to spend some time trying to get it back in again.
If you are using Firefox you are lucky. There is a way to make your browser remember the password and even reveal whether you have typed the correct password.
Here are the steps to freedom:
Perhaps you want to bookmark this page before the required Firefox restart, so you can easily return to this small guide. Press Ctrl-D to do that.
You don't have to restart Firefox to get Greasemonkey scripts working. If the script is not working, just refresh the page (press F5).
Now you are all set. Your form data will be remembered, and you can verify that you are typing the correct password.
You can get many more scripts here: http://userscripts.org/
I tried to search all over the web for a JavaServer Faces component which does a simple thing: totals and subtotals. And I failed. It seems the whole web is about how much more AJAX you can put everywhere, not about how much more useful your product can be for the average user.
It could be done through a facet tag, which is obvious.
I think of that as a competitive advantage for myself. If nobody has done it, I should do it and be recognised for it!
An excellent article on branching in Subversion. One thing missing is a branch timeline chart, something like this:
where the horizontal line is the trunk timeline and the angled lines are releases. In my experience, such a chart often helps developers understand when branches are created and how to deal with them.
I have been reading a lot of Ruby on Rails books in order to start my own project. I found something which really disturbs me: though Ruby is a proper OO language, none of the OO design features are presented in Rails books.
Proper Object Oriented Design should simplify software development by introducing abstraction from unnecessary logic constraints.
Take, for example, the books dealing with building a simple online store. They all say that you cannot add products to the shopping cart directly, because a product can be in many shopping carts. Because of that you have to add LineItems, which point to a product description.
This is true when we speak about database design, but completely untrue when we speak about software design. From an OO point of view there is no such thing as a LineItem. It has no value to the business logic and it must be dropped from the model.
This model looks much cleaner and causes no confusion. Additional work must be done on the model layer, but that is just to work around database limitations.