Caution when using banner_* class names in Firefox

While working in Firefox, I set an element's class name to `banner_image`. The element then stopped being displayed in Firefox, so I changed it to a different class name (one that does not contain `banner`).

- Firefox version used: 7.1.0 64-bit
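A minimal sketch of the fix. Ad-blocking filter lists commonly hide elements whose class names contain words like `banner`, which would explain the symptom; the replacement class name below is hypothetical:

```html
<!-- Hidden in Firefox when ad blocking kicks in: the class name matches "banner" filters. -->
<img class="banner_image" src="main.png" alt="main visual">

<!-- Renamed so filter lists no longer match (replacement name is hypothetical). -->
<img class="main_visual_image" src="main.png" alt="main visual">
```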
See Also
- It does not appear under Firefox's Content Blocking list.
Trimming HTTP response headers

The trimmed response drops the CORS and tracking headers (`accept-encoding`, `Access-Control-*`, `X-GA-Service`).

```
// Headers before trimming
HTTP/1.1 200 OK
Connection: Keep-Alive
Content-Length: 15
Content-Type: application/json
accept-encoding: gzip
Access-Control-Allow-Origin: *
X-GA-Service: collect
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Authorization, X-Requested-With, Content-Type, Content-Encoding

// Headers after trimming
HTTP/1.1 200 OK
Connection: Keep-Alive
Content-Length: 15
Content-Type: application/json
```
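The note does not say how the trimming was done. As one generic approach in a Java servlet stack, a filter registered ahead of whatever adds the headers can wrap the response and silently drop them. A sketch only; the class name is hypothetical and the javax Servlet 4.0 API (default `init`/`destroy`) is assumed:

```java
import java.io.IOException;
import java.util.Set;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Drops the listed response headers by intercepting set/add calls.
public class HeaderStrippingFilter implements Filter {

    private static final Set<String> BLOCKED = Set.of(
            "accept-encoding", "x-ga-service", "access-control-allow-origin",
            "access-control-allow-methods", "access-control-allow-headers");

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse wrapped = new HttpServletResponseWrapper((HttpServletResponse) res) {
            @Override
            public void setHeader(String name, String value) {
                if (!BLOCKED.contains(name.toLowerCase())) super.setHeader(name, value);
            }

            @Override
            public void addHeader(String name, String value) {
                if (!BLOCKED.contains(name.toLowerCase())) super.addHeader(name, value);
            }
        };
        chain.doFilter(req, wrapped);
    }
}
```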
Enterprise architecture

To build an enterprise architecture, the technical side must be defined, and the reference consulted for that work is the Technical Reference Model (TRM). In other words, the TRM records concretely how each capability should behave: "such-and-such a function must be possible," and so on. To implement the model more concretely, suitable technologies are then selected from what is currently available, and the collection of specifications (profiles) for those technologies becomes the Standards Profile. Think of the Enterprise Model, the System Model, and the Technology Model as being considered together.

MySQL users and privileges

```sql
CREATE USER 'reader'@'222.111.99.55' IDENTIFIED BY 'mypassword';
DROP USER 'reader'@'222.111.99.55';
GRANT SELECT, EXECUTE ON *.* TO 'reader'@'222.111.99.55' WITH GRANT OPTION;

-- all privileges
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES,
  INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE,
  REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE,
  ALTER ROUTINE, CREATE USER, EVENT, TRIGGER
  ON *.* TO 'all-round-user'@'%' WITH GRANT OPTION;

-- check privileges
SHOW GRANTS FOR 'bob'@'localhost';
SHOW GRANTS FOR CURRENT_USER;

-- grant all privileges on a specific database
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER,
  CREATE TEMPORARY TABLES, CREATE VIEW, EVENT, TRIGGER, SHOW VIEW,
  CREATE ROUTINE, ALTER ROUTINE, EXECUTE
  ON mydatabase.* TO 'siteuser'@'%';
FLUSH PRIVILEGES;

-- move the account to a wider host pattern
RENAME USER 'reader'@'222.111.99.55' TO 'reader'@'222.111.99.%';
```
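After the `RENAME USER` above, the account keeps its privileges at the new host pattern. A quick check (the commented line is a sketch of what MySQL typically reports):

```sql
SHOW GRANTS FOR 'reader'@'222.111.99.%';
-- GRANT SELECT, EXECUTE ON *.* TO 'reader'@'222.111.99.%' WITH GRANT OPTION
```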
DNS records

```console
c:\> nslookup -debug cocktailfunding.com
```

The NS and SOA records must not be deleted, and according to the documentation you must not create other NS or SOA records either. It can be changed to somewhere in the range of 60 to 900 seconds.

@ControllerAdvice
@ModelAttribute

A `@ModelAttribute` method runs before the `@RequestMapping` handler methods.
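A minimal sketch showing this ordering (the class and attribute names are hypothetical): a `@ControllerAdvice` whose `@ModelAttribute` method populates the model before every handler in the application runs.

```java
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ModelAttribute;

// Applies to all controllers; the @ModelAttribute method below is
// invoked before each @RequestMapping handler method.
@ControllerAdvice
public class GlobalModelAdvice {

    // Runs first, so every handler (and view) can read "appName".
    @ModelAttribute
    public void addCommonAttributes(Model model) {
        model.addAttribute("appName", "demo-app"); // hypothetical value
    }
}
```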
@EnableAsync

Using the `@EnableAsync` annotation, you can customize the `Executor` instance.

```java
@Configuration
@EnableAsync(proxyTargetClass = true)
@EnableScheduling
public class AsyncConfiguration extends AsyncConfigurerSupport {

    // Bound by `@Autowired` to the `threadPoolTaskExecutor` bean defined below.
    @Autowired
    private ThreadPoolTaskExecutor threadPoolTaskExecutor;
    ...
    // Bound by `@Resource(name = "secondThread")` to the bean named
    // `secondThread`, which must be defined in the application context.
    @Resource(name = "secondThread")
    private ThreadPoolTaskExecutor secondThread;

    @Override
    public Executor getAsyncExecutor() {
        return threadPoolTaskExecutor;
    }
    ...
    @Bean(destroyMethod = "shutdown")
    public ThreadPoolTaskExecutor threadPoolTaskExecutor() {
        // `@Bean` creates a bean named `threadPoolTaskExecutor`,
        // which `@Autowired` then binds to the field above.
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10);
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(50);
        executor.setThreadNamePrefix("mythread-");
        executor.initialize();
        return executor;
    }

    /**
     * The bean from `createSecondThreadPool` is created under the name `secondThread`.
     */
    @Bean(name = "secondThread")
    @Qualifier
    public Executor createSecondThreadPool() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(TASK_BATCH_CORE_POOL_SIZE);
        executor.setMaxPoolSize(TASK_BATCH_MAX_POOL_SIZE);
        executor.setQueueCapacity(TASK_BATCH_QUEUE_CAPACITY);
        executor.setBeanName(EXECUTOR_BATCH_BEAN_NAME);
        executor.initialize();
        return executor;
    }
}
```
```java
@Service
public class NhBatchThreadRun {
    ...
    @Autowired
    MyService1 myService1;
    @Autowired
    MyService2 myService2;

    /**
     * `@Async("secondThread")` uses the bean named `secondThread`.
     * That bean must be defined in a configuration class
     * (see `AsyncConfiguration.createSecondThreadPool`).
     *
     * The `secondThread` executor is what runs `secondThreadRun`.
     *
     * @ref: [How To Do @Async in Spring | Baeldung](https://www.baeldung.com/spring-async)
     *
     * @param str
     */
    @Async("secondThread")
    public void secondThreadRun(String str) {
        logger.debug("THREAD START!!!");
        // batch start!!
        try {
            while (!Thread.interrupted()) {
                logger.debug("THREAD ING!!!! " + new Date().toString());
                myService1.send();
                myService2.insertIntoDB();
                ...
                Thread.sleep(1000 * 5);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        logger.debug("THREAD END!!!!");
    }
}
```
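Worth remembering when calling it: `@Async` only takes effect when the method is invoked through the Spring proxy, i.e. from another bean; a self-invocation inside the same class bypasses the proxy and runs synchronously. A minimal sketch of a caller (the surrounding class is hypothetical):

```java
@Service
public class BatchStarter {

    @Autowired
    private NhBatchThreadRun nhBatchThreadRun;

    public void start() {
        // Returns immediately; the loop runs on the "secondThread" pool.
        nhBatchThreadRun.secondThreadRun("start");
    }
}
```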
iptables -S

`-S` is short for specification: `iptables -S` prints the current rules in the same form you would type to recreate them.

```bash
$ iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -s 127.0.0.1/32 -p tcp -m tcp --dport 22 -j DROP
-A INPUT -s 127.0.0.1/32 -p tcp -m tcp --dport 21 -j DROP
-A INPUT -i eth1 -j ACCEPT
-A INPUT -i tun0 -j ACCEPT
-A INPUT -s 192.168.21.1/32 -j ACCEPT
-A INPUT -s 127.0.0.1/32 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -s 127.0.0.1/32 -p tcp -m tcp --dport 3306 -j ACCEPT
-A INPUT -s 203.133.167.16/24 -j ACCEPT
```
iptables -L -n --line-numbers

```bash
$ iptables -L -n --line-numbers
Chain INPUT (policy ACCEPT)
num  target  prot opt source             destination
1    DROP    tcp  --  127.0.0.1          0.0.0.0/0    tcp dpt:22
2    DROP    tcp  --  127.0.0.1          0.0.0.0/0    tcp dpt:21
3    ACCEPT  all  --  0.0.0.0/0          0.0.0.0/0
4    ACCEPT  all  --  0.0.0.0/0          0.0.0.0/0
5    ACCEPT  all  --  203.133.167.16/24  0.0.0.0/0
27   DROP    tcp  --  0.0.0.0/0          0.0.0.0/0    tcp dpt:22

Chain FORWARD (policy ACCEPT)
num  target  prot opt source             destination

Chain OUTPUT (policy ACCEPT)
num  target  prot opt source             destination
```
iptables -P INPUT ACCEPT

`-P` sets a chain's default policy. Only built-in chains such as `INPUT`, `FORWARD`, and `OUTPUT` can be used; see ref. 1 for details.

`-A INPUT` appends a rule to the INPUT chain.

```bash
# Append the following rule to the INPUT chain:
# for interface eth1, jump to the ACCEPT target.
iptables -A INPUT -i eth1 -j ACCEPT
```
`-j` specifies the target (`ACCEPT`, `DROP`, ...). Remember it as "jump to the target."

```bash
iptables -A INPUT -s 192.168.21.1/32 -j ACCEPT
```
```bash
# Delete the rule at position 10 of the INPUT chain.
iptables -D INPUT 10

# Insert the following rule at position 1 of the INPUT chain:
# for tcp traffic to port 80, jump to the ACCEPT target.
iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
```
A hardening script for a web server that talks to a MySQL backend (Apache ---> MySQL):

```bash
#!/bin/bash
# setup basic chains and allow all or we might get locked out while the rules are running...
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

# clear rules
iptables -F

# allow HTTP inbound and replies
iptables -A INPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

# allow HTTPS inbound and replies
iptables -A INPUT -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT

# limit ssh connects to 10 every 10 seconds
# change the port 22 if ssh is listening on a different port (which it should be)
# in the instance's AWS Security Group, you should limit SSH access to just your IP
# however, this will severely impede a password crack attempt should the SG rule be misconfigured
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 10 --hitcount 10 -j DROP

# allow SSH inbound and replies
# change the port 22 if ssh is listening on a different port (which it should be)
iptables -A INPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

# root can initiate HTTP outbound (for yum)
iptables -A OUTPUT -p tcp --dport 80 -m owner --uid-owner root -m state --state NEW,ESTABLISHED -j ACCEPT
# anyone can receive replies (ok since connections can't be initiated)
iptables -A INPUT -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

# root can do DNS searches (if your Subnet is 10.0.0.0/24 AWS DNS seems to be on 10.0.0.2)
# if your subnet is different, change 10.0.0.2 to your value (eg a 172.31.1.0/24 Subnet would be 172.31.1.2)
# see http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html
# DNS = start subnet range "plus two"
iptables -A OUTPUT -p udp --dport 53 -m owner --uid-owner root -d 10.0.0.2/32 -j ACCEPT
iptables -A INPUT -p udp --sport 53 -s 10.0.0.2/32 -j ACCEPT

# apache user can talk to rds server on 10.0.0.200:3306
iptables -A OUTPUT -p tcp --dport 3306 -m owner --uid-owner apache -d 10.0.0.200 -j ACCEPT
iptables -A INPUT -p tcp --sport 3306 -s 10.0.0.200 -j ACCEPT

# now drop everything else
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP

# save config
/sbin/service iptables save
```
```bash
# Quotes are required so su passes the whole command as a single argument.
su -c 'chmod 440 /proc/net/unix'
```
if not "%JSSE_OPTS%" == "" goto gotJsseOpts set "JSSE_OPTS=-Djdk.tls.ephemeralDHKeySize=2048 -Duser.language=en -Duser.region=US" :gotJsseOpts set "JAVA_OPTS=%JAVA_OPTS% %JSSE_OPTS%"
```python
# encoding=utf8
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
import codecs

# data I/O
with codecs.open('input.txt', 'r', encoding='utf-8') as fp:
    data = fp.read()
# data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print ('data has %d characters, %d unique.' % (data_size, vocab_size))
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }

# hyperparameters
hidden_size = 100 # size of hidden layer of neurons
seq_length = 25 # number of steps to unroll the RNN for
learning_rate = 1e-1

# model parameters
Wxh = np.random.randn(hidden_size, vocab_size)*0.01 # input to hidden
Whh = np.random.randn(hidden_size, hidden_size)*0.01 # hidden to hidden
Why = np.random.randn(vocab_size, hidden_size)*0.01 # hidden to output
bh = np.zeros((hidden_size, 1)) # hidden bias
by = np.zeros((vocab_size, 1)) # output bias

def lossFun(inputs, targets, hprev):
  """
  inputs,targets are both list of integers.
  hprev is Hx1 array of initial hidden state
  returns the loss, gradients on model parameters, and last hidden state
  """
  xs, hs, ys, ps = {}, {}, {}, {}
  hs[-1] = np.copy(hprev)
  loss = 0
  # forward pass
  for t in range(len(inputs)):
    xs[t] = np.zeros((vocab_size,1)) # encode in 1-of-k representation
    xs[t][inputs[t]] = 1
    hs[t] = np.tanh(np.dot(Wxh, xs[t]) + np.dot(Whh, hs[t-1]) + bh) # hidden state
    ys[t] = np.dot(Why, hs[t]) + by # unnormalized log probabilities for next chars
    ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t])) # probabilities for next chars
    loss += -np.log(ps[t][targets[t],0]) # softmax (cross-entropy loss)
  # backward pass: compute gradients going backwards
  dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
  dbh, dby = np.zeros_like(bh), np.zeros_like(by)
  dhnext = np.zeros_like(hs[0])
  for t in reversed(range(len(inputs))):
    dy = np.copy(ps[t])
    dy[targets[t]] -= 1 # backprop into y. see http://cs231n.github.io/neural-networks-case-study/#grad if confused here
    dWhy += np.dot(dy, hs[t].T)
    dby += dy
    dh = np.dot(Why.T, dy) + dhnext # backprop into h
    dhraw = (1 - hs[t] * hs[t]) * dh # backprop through tanh nonlinearity
    dbh += dhraw
    dWxh += np.dot(dhraw, xs[t].T)
    dWhh += np.dot(dhraw, hs[t-1].T)
    dhnext = np.dot(Whh.T, dhraw)
  for dparam in [dWxh, dWhh, dWhy, dbh, dby]:
    np.clip(dparam, -5, 5, out=dparam) # clip to mitigate exploding gradients
  return loss, dWxh, dWhh, dWhy, dbh, dby, hs[len(inputs)-1]

def sample(h, seed_ix, n):
  """
  sample a sequence of integers from the model
  h is memory state, seed_ix is seed letter for first time step
  """
  x = np.zeros((vocab_size, 1))
  x[seed_ix] = 1
  ixes = []
  for t in range(n):
    h = np.tanh(np.dot(Wxh, x) + np.dot(Whh, h) + bh)
    y = np.dot(Why, h) + by
    p = np.exp(y) / np.sum(np.exp(y))
    ix = np.random.choice(range(vocab_size), p=p.ravel())
    x = np.zeros((vocab_size, 1))
    x[ix] = 1
    ixes.append(ix)
  return ixes

n, p = 0, 0
mWxh, mWhh, mWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
mbh, mby = np.zeros_like(bh), np.zeros_like(by) # memory variables for Adagrad
smooth_loss = -np.log(1.0/vocab_size)*seq_length # loss at iteration 0
while True:
  # prepare inputs (we're sweeping from left to right in steps seq_length long)
  if p+seq_length+1 >= len(data) or n == 0:
    hprev = np.zeros((hidden_size,1)) # reset RNN memory
    p = 0 # go from start of data
  inputs = [char_to_ix[ch] for ch in data[p:p+seq_length]]
  targets = [char_to_ix[ch] for ch in data[p+1:p+seq_length+1]]

  # sample from the model now and then
  if n % 100 == 0:
    sample_ix = sample(hprev, inputs[0], 200)
    txt = ''.join(ix_to_char[ix] for ix in sample_ix)
    print ('----\n %s \n----' % (txt, ))

  # forward seq_length characters through the net and fetch gradient
  loss, dWxh, dWhh, dWhy, dbh, dby, hprev = lossFun(inputs, targets, hprev)
  smooth_loss = smooth_loss * 0.999 + loss * 0.001
  if n % 100 == 0: print ('iter %d, loss: %f' % (n, smooth_loss)) # print progress

  # perform parameter update with Adagrad
  for param, dparam, mem in zip([Wxh, Whh, Why, bh, by],
                                [dWxh, dWhh, dWhy, dbh, dby],
                                [mWxh, mWhh, mWhy, mbh, mby]):
    mem += dparam * dparam
    param += -learning_rate * dparam / np.sqrt(mem + 1e-8) # adagrad update

  p += seq_length # move data pointer
  n += 1 # iteration counter
```
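To try it, save the script (e.g. as `min-char-rnn.py`, the gist's convention) next to a plain-text `input.txt` and run it; every 100 iterations it prints a 200-character sample and the smoothed loss, as in the log below:

```bash
$ python min-char-rnn.py
```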
```
----
 beahngy amo k ns aeo?cdse nh a taei.rairrhelardr nela haeiahe. Ddelnss.eelaishaner” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae abeahngy amo k ns aeo?cdse nh a taei.rairrhelardr nol e iohahenasen 
----
iter 9309400, loss: 0.000086
----
 e nh a taei.rairrhelardr naioa aneaa ayio pe e bhnte ayio pe e h’e btentmuhgehi bcgdltt. gey heho grpiahe. Ddelnss.eelaishaner” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae abeahngy amo k ns aeo?cds 
----
iter 9309500, loss: 0.000086
----
 jCTCnhoofeoxelif edElobe negnk e iohehasenoldndAmdaI ayio pe e h’e btentmuhgehi bcgdltt. gey heho grpiahe. Ddelnss.eelaishaner” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae abeahngy amo k ns aeo?cds 
----
iter 9309600, loss: 0.000086
----
 negnk e iohehasenoldndAmdaI ayio pe e h’e btentmuhgehi bcgdltt. gey heho grpiahe. Ddelnss.eelaishaner” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae abeahngy amo k ns aeo?cdse nh a taei.rairrhelardr 
----
iter 9309700, loss: 0.000086
----
 aI ayio pe e h’e btentmuhgehi bcgdltt. gey heho grpiahe. Ddelnss.eelaishaner” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae abeahngy amo k ns aeo?cdse nh a taei.rairrhelardr neli ae e angnI hyho gben 
----
iter 9309800, loss: 0.000086
----
 gehi bcgdltt. gey heho grpiahe. Ddelnss.eelaishaner” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae abeahngy amo k ns aeo?cdse nh a taei.rairrhelardr nela dr iohecgrpiahe. Ddelnss.eelaishaner” cot AA 
----
iter 9309900, loss: 0.000086
----
 piahe. Ddelnss.eelaishaner” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae abeahngy amo k ns aeo?cdse nh a taei.rairrhelardr nol e iohahenasenese hbea bhnte ectrsnae abeahngy amo k ns aeo?cdse nh a t 
----
iter 9310000, loss: 0.000086
----
 er” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae abeahngy amo k ns aeo?cdse nh a taei.rairrhelardr nela hamnaI ayio pe e h’e btentmuhgnhi beahe Ddabealohe bee amoi bcgdltt. gey heho grpiahe. Ddeln 
----
iter 9310100, loss: 0.000086
----
 bih a”on bhnte ectrsnae abeahngy amo k ns aeo?cdse nh a taei.rairrhelardr nol gyio pe e h’e btentmuhgehi bcgdltt. gey heho grpiahe. Ddelnss.eelaishaner” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae 
----
iter 9310200, loss: 0.000086
----
 beahngy amo k ns aeo?cdse nh a taei.rairrhelardr ntlhnegnns. e amo k ns aeh?cdse nh a taei.rairrhelardr nol e iohehengrpiahe. Ddelnss.eelaishaner” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae abeah 
----
iter 9310300, loss: 0.000086
----
 e nh a taei.rairrhelardr nol’e btentmuhgehi gcdslatha arenbggcodaeta tehr he ni.rhelaney gehnha e ar i ho bee amote ectrsnae abeahngy amo k ns aeo?cdse nh a taei.rairrhelardr nol nyio chge heiohecgr 
----
iter 9310400, loss: 0.000086
----
 jCTCnhoofeoxelif edElobe negnk e iohehasenoldndAmdaI ayio pe e h’e btentmuhgehi bcgdltt. gey heho grpiahe. Ddelnss.eelaishaner” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae abeahngy amo k ns aeo?cds 
----
iter 9310500, loss: 0.000086
----
 negnk e iohehasenoldndAmdaI ayio pe e h’e btentmuhgehi bcgdltt. gey heho grpiahe. Ddelnss.eelaishaner” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae abeahngy amo k ns aeo?cdse nh a taei.rairrhelardr 
----
iter 9310600, loss: 0.000086
----
 aI ayio pe e h’e btentmuhgehi bcgdltt. gey heho grpiahe. Ddelnss.eelaishaner” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae abeahngy amo k ns aeo?cdse nh a taei.rairrhelardr nelardae abeahngy amo k 
----
iter 9310700, loss: 0.000086
----
 gehi bcgdltt. gey heho grpiahe. Ddelnss.eelaishaner” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae abeahngy amo k ns aeo?cdse nh a taei.rairrhelardr ntl negnk t hi rsnse nhk br ne” a naeiarairr elirs 
----
iter 9310800, loss: 0.000086
----
 piahe. Ddelnss.eelaishaner” cot AAfhB ht ltny ehbih a”on bhnte ectrsnae abeahngy amo k ns aeo?cdse nh a taei.rairrhelardr nelardaenabeahngelareierhi. aif edElobe negrcih gey gey heho grpiahe. Ddel 
----
```