Attending: Nancy Chanover (NMSU), Bill Ketzeback (APO), Misty Bentz (GSU), Aleksandr Mosenkov (BYU), Anne Verbiscer (UVa), Michael Hayden (OU), Kevin Schlaufman (JHU), Joanne Hughes (Seattle U), Moire Prescott (NMSU), Chip Kobulnicky (UWy), Ben Williams (UW)
The detailed site report is included below, followed by additional information discussed during today's meeting.
3.5-m Telescope and Instruments Highlights, 8/06/25 – 09/02/25
1) Overview
The month of August experienced a mix of late-afternoon monsoon storms that either delayed opening or affected many of the A-half observing programs. We had two visiting instrument teams from U. Virginia this month. There was a failure of one of the main site network subnets that prevented observing and took down the site phones as well as communications via TUI. Email could be sent and received from off site but not on site. We traced the problem to a failure of a managed network switch and were able to replace it with a spare. There was some difficulty loading the management configuration because the switches were different models. The root cause is still not well understood; it looked like a cascading failure involving the site DHCP server that also resulted in loss of communications with the site DNS servers. The problem took a little over 24 hours to resolve.
The hiring committees for the Telescope Engineer and Night Operations positions are in the process of reviewing candidates. There were a large number of applications for each opening.
2) Operations
3.5m Telescope: The telescope is working as expected. Seasonal motion errors have been infrequent. The tertiary rotation was fully remapped, and this seems to have corrected the position errors reported last month.
0.5m Telescope: The telescope is working as expected. ACP Library support errors are still occurring even after the dcam-spare camera swap, but at a much lower frequency. Dcam-spare lost CCD chamber integrity and was forming ice on the detector while cold. The camera was serviced twice but now appears to have water spots on or near the detector surface. Cary Smith will attempt to clean it, though this is not without risk to the detector's bond wires. The University of Virginia tested another surplus Apogee camera during their coordinated observing run with DSSI on the 3.5m. The off-axis guider has not been very reliable, requiring power cycles.
KOSMOS: System is cooled and stable. Increased dark current for long exposures has been confirmed. We attempted a vacuum servicing of the cryostat but without much luck. We anticipate having to open the vacuum vessel in the clean room sometime during Quarter 4 to investigate possible thermal shorts or opens. kcamera-ICC was backed up over shutdown.
ARCTIC: The diffuser rotation mechanism is still unreliable even after a full servicing. Troubleshooting is continuing. The mechanism that moves the diffuser in and out of the optical path is still functional in the meantime. The rest of the instrument is cooled.
Agile: The camera is non-operational; the thermoelectric cooler failed again. The camera is warm and we do not plan further repair work on it. We are planning to decommission the camera. The Agile instrument rotator is still not performing nominally, and we are troubleshooting it further to prepare for SoonerCam.
ARCES: The CCD reservoir for the cooling system was brought back to temperature. IOL levels are quite good but slowly worsening. We hope that they will stabilize and reverse direction with cooler ambient temperatures, as we have seen in previous fall time frames. Work continues on a modern replacement ICC; testing so far is going well. A commissioning report is in progress.
DIS: System is cooled but in an unknown state for science. Decommissioning plans have begun.
NICFPS: System is cooled and usable. The ICS software has needed multiple restarts over the past month. The cause is unclear at this time.
TripleSpec: System is cooled and usable.
APOLLO: The instrument is usable for laser ranging.
Tail end of monsoon season - lots of A halves affected in August. A failure of a main site network subnet switch took out the site phones and the 1075 network - the common communication network that handles site communications, computers, and non-telescope-specific instruments and computers. The switch was replaced; it took about 24 hours to get back up. It was an old switch with no direct spare, so it was replaced with a different managed switch - it needs to be managed because it passes through specific configurations for wifi security and other security settings for other switches. Not a quick swap, but Tracey and Shane jumped on it right away; apologies for one night of lost data. There are no plans to change the network configuration, as it would be cost prohibitive to reengineer the entire network. It also doesn't make sense to keep spare switches on the shelf pre-loaded with configurations for specific switches, given that this one lasted 20 years. Having staff better trained on how to load the management configuration onto switches is important. The core security switches coming into the building are mostly >48-port switches, so fairly expensive; we are trying to source a replacement for the spare switch we used. A major part of the recovery was getting the network back up and running: the team worked until 10 pm, started again at 6 am, and had the network working by 4 pm.
Interviews are ongoing; on-site interviews will begin soon, and we hope to make an offer perhaps even by the next meeting.
0.5m: the DIS camera can be used as a dcam spare once decommissioning is further along.
Need to check with UVa on whether the spare camera can be used by the broader community.
KOSMOS vacuum servicing didn't resolve the issue - we will have to open up the cryostat to see if we can figure out what is causing the decreased hold time and increased dark current. Currently planning to do this during Q4; the science schedule indicates that may not be possible until early December, unless it becomes critical, in which case the instrument would be taken out of service. Chip: a recent user suggests that the dark current is so high the instrument is unusable for faint targets, requiring doubling and tripling of exposure times - need to check with that user. Balancing class visits in Q4 with servicing needs.
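The "doubling and tripling exposure time" remark can be illustrated with a little arithmetic: dark current adds shot noise that grows with exposure time, so once it rivals the source counts, recovering the same signal-to-noise requires roughly doubling the integration. A minimal sketch, with purely hypothetical count rates (not measured KOSMOS values):

```python
import math

def snr(source_rate, dark_rate, read_noise, t):
    """Single-pixel CCD signal-to-noise estimate (sky background ignored).
    source_rate: source counts in e-/s; dark_rate: dark current in e-/s/pix;
    read_noise: read noise in e-; t: exposure time in seconds."""
    signal = source_rate * t
    noise = math.sqrt(source_rate * t + dark_rate * t + read_noise ** 2)
    return signal / noise

# Hypothetical faint source of 1 e-/s in a 600 s exposure:
t = 600
nominal = snr(1.0, 0.01, 5.0, t)   # low dark current  -> SNR ~ 23.9
elevated = snr(1.0, 1.0, 5.0, t)   # high dark current -> SNR ~ 17.1
print(nominal, elevated)
```

With the elevated dark rate, reaching the nominal SNR of ~24 requires roughly 1100-1200 s instead of 600 s, consistent with the reported need to double exposure times on faint targets.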
ARCES had poor IOL - contamination in the light path of the CCD. We warmed up half of the cryostat while keeping the shield cold so that water vapor (if it is water vapor) would migrate to the shield. That improved things significantly, but levels subsequently degraded somewhat; we are hoping they will stabilize or reverse direction. A modern computer for the ARCES ICC is underway.
NICFPS, TripleSpec as expected.
DIS is powered down. APOLLO has been laser ranging and was able to get 5 retroreflectors very recently, but it still isn't at peak performance.
New Personnel: Welcome to Tim McQuaid, a third-year graduate student at NMSU, who will be replacing Mark Croom as the emergency fill-in observing specialist. Tim has already begun training so you will likely encounter him soon!
There is some OPEN time remaining in September (last month of Q3). We have received requests for some of it, but not all. To request this time please follow the standard procedure by emailing your request to Ben Williams, Russet McMillan, Amanda Townsend, Nancy Chanover, and your institutional scheduler. Be sure to include the specific slot you are requesting (or specify that any time slot will do) and a short justification.
The Q4 schedule is in the works; we expect to have a first draft in about a week.
September 8-17 (next week!) is the only open time remaining in the ARCSAT Q3 schedule.
Close to having something ready - all requested pieces and updates have been implemented. Need to coordinate with SDSS before going live, but users should start using it at mainapo.nmsu.edu - the more eyes on it, the better.
4 classes coming in Q4, 1 moved to Q1 (?)
ASPEN team meeting
Open House on 10/11
ARC BoG meeting on November 4 - will request science highlights next month
Jan AAS meeting - booth, other? - Oct 6 is the abstract deadline, 9/25 is Splint
Open action items from previous meetings:
New action items from this meeting:
None.
The next meeting will be on October 7, 2025 at 10:30 MDT.